What does it do?
The service I created expects a GET request in the form of:
http://localhost:8000?text=犬&source=ja&target=en
In this case I'm translating the Japanese 犬 to the English dog.
The result of this call in the console is:
Server running at http://127.0.0.1:8000
0s, 0.054ms - Request start
0s, 147.451ms - Google response: dog
0s, 148.196ms - Response returned to caller
0s, 184.605ms - Systran response: dog

The result returned is:
{ "result": "dog", "source": "Google" }As you can see, Google is first to respond. The response from Google is returned to the client which does not have to wait for the result of Systran to come in.
If we delay the returning of Google's response by 1 second (using setTimeout), we see the following:
Server running at http://127.0.0.1:8000
0s, 0.003ms - Request start
0s, 107.941ms - Systran response: dog
0s, 108.059ms - Response returned to caller
1s, 78.788ms - Google response: dog

These are just single requests, so the timing values differ slightly between runs.
The following result is returned:
{ "result": "dog", "source": "Systran" }
How does it work?
This setup is actually surprisingly simple using JavaScript and callbacks. The http module is used to create an HTTP server and listen on a port. The url module is used to parse the incoming request. The request module is used to create the GET request needed for SYSTRAN; see systran-translate.js (I've of course changed the API key ;). In the function that writes the server response, which is called from the callback functions of the Google and Systran calls, I check whether a response has already been returned. If not, I return it; if it has already been returned, I do nothing.
Below is a snippet from my main file, which starts the server, calls the services and returns the response.
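A minimal sketch of what that looks like; the ./google-translate and ./systran-translate wrapper modules and their translate(text, source, target, callback) signature are assumptions for illustration, not the literal code from my project:

// Sketch of the main file: start the server, call both services and
// return whichever response comes in first.
var http = require('http');
var url = require('url');
var googleTranslate = require('./google-translate');   // hypothetical wrapper
var systranTranslate = require('./systran-translate'); // hypothetical wrapper

http.createServer(function (req, res) {
  var query = url.parse(req.url, true).query;
  var responseReturned = false;

  // Called from the callbacks of both services; only the first result wins.
  function handleResult(result, source) {
    if (!responseReturned) {
      responseReturned = true;
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ result: result, source: source }));
    }
  }

  googleTranslate.translate(query.text, query.source, query.target, function (result) {
    handleResult(result, 'Google');
  });

  systranTranslate.translate(query.text, query.source, query.target, function (result) {
    handleResult(result, 'Systran');
  });
}).listen(8000, '127.0.0.1', function () {
  console.log('Server running at http://127.0.0.1:8000');
});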
I've called the Google API through the node-google-translate-skidz module; not much interesting to show there. To do the Systran translation, I've used the request module in systran-translate.js; a sketch of that code is shown below.
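In this sketch the parsing of the response body (outputs[0].output) is an assumption about the SYSTRAN Platform JSON response and may need adjusting to what you actually receive:

// systran-translate.js -- sketch of the SYSTRAN call.
// Assumption: the translation is returned in outputs[0].output of the
// JSON response body; adjust if the actual response differs.
var request = require('request');

var API_KEY = 'GET_YOUR_OWN_API_KEY';
var BASE_URL = 'https://api-platform.systran.net/translation/text/translate';

exports.translate = function (text, source, target, callback) {
  var requestUrl = BASE_URL +
    '?key=' + API_KEY +
    '&source=' + source +
    '&target=' + target +
    '&input=' + encodeURIComponent(text);

  // console.log(requestUrl);

  request(requestUrl, function (error, response, body) {
    if (error || response.statusCode !== 200) {
      // Error handling is left out here; see the remarks at the end.
      return;
    }
    // console.log(body);
    var parsed = JSON.parse(body);
    callback(parsed.outputs[0].output);
  });
};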
If you uncomment the console.log lines, you can see the actual request that is being sent, such as: https://api-platform.systran.net/translation/text/translate?key=GET_YOUR_OWN_API_KEY&source=ja&target=en&input=%E7%8A%AC
%E7%8A%AC is of course the URL-encoded form of 犬.
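This is just the percent-encoding of the character's UTF-8 bytes, which you can reproduce in Node.js:

// 犬 (U+72AC) is the UTF-8 byte sequence E7 8A AC
console.log(encodeURIComponent('犬')); // prints %E7%8A%AC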
Why is this interesting?
Suppose you are running a process engine which executes your service orchestration in a single thread. In some cases such an engine does not allow you to split a synchronous request/reply into a separate request and a reply that is received later, which often makes the call blocking. While execution is blocked, how are you going to react to another response arriving at your process? There are also several timeouts to take into account, such as a JTA timeout. And what happens if a reply never comes? This can be a serious issue, since it may keep an OS thread blocked, which can lead to stuck threads and may even hang the server if it happens often.
Due to the asynchronous nature of Node.js, a scenario like the one shown above suddenly becomes trivial, as you can see from this simple example. By using a pattern such as this, you can get much better perceived performance. Suppose you have many clustered services which are all relatively lightweight, and their performance varies due to external circumstances. If you call a small set of them at (almost) the same time, you can quickly give a response to the customer. The trade-off is that you also call services whose answer may no longer be interesting by the time it arrives, which increases total system load.
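The same "first answer wins" idea can also be expressed with promises instead of the callback flag used in this post; a sketch using Promise.race and the same hypothetical wrapper modules as above:

var googleTranslate = require('./google-translate');   // hypothetical wrapper
var systranTranslate = require('./systran-translate'); // hypothetical wrapper

// Wrap a callback-style translate function in a promise.
function toPromise(service, sourceName, text, source, target) {
  return new Promise(function (resolve) {
    service.translate(text, source, target, function (result) {
      resolve({ result: result, source: sourceName });
    });
  });
}

// Promise.race resolves with whichever service answers first.
Promise.race([
  toPromise(googleTranslate, 'Google', '犬', 'ja', 'en'),
  toPromise(systranTranslate, 'Systran', '犬', 'ja', 'en')
]).then(function (winner) {
  console.log(winner); // e.g. { result: 'dog', source: 'Google' }
});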
Several things are missing from this example, such as proper error handling. You might also want to return a response if one of the services fails. And if the request handler encounters an error, the entire server crashes; you probably want to avoid that. Routing has not been implemented, to keep the example as simple as possible. Security is of course something you would handle in your API platform solution.
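As a starting point for the error handling, a sketch (not part of the original code) of a handler that gives up once every service has reported a failure, instead of leaving the client waiting:

// Sketch: return an error response once both services have failed.
var url = require('url');

function handleTranslationRequest(req, res) {
  var query = url.parse(req.url, true).query;
  var responseReturned = false;
  var failures = 0;

  function handleResult(result, source) {
    if (!responseReturned) {
      responseReturned = true;
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ result: result, source: source }));
    }
  }

  function handleFailure() {
    failures += 1;
    // Only give up once every service has failed.
    if (failures === 2 && !responseReturned) {
      responseReturned = true;
      res.writeHead(502, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: 'no translation service responded' }));
    }
  }

  // Call the services as before, invoking handleFailure from their error
  // paths and handleResult on success.
}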