
Facial Recognition with Watson – Part 2

In the second part of this tutorial, we are going to create the integration with the Watson Facial Recognition API. It is pretty simple: we just need to send the image to the API, and it will reply with the information about the image. So, let's understand how it works, from the client's request to the server's response:

  • First, the client selects an image; the image is encoded to Base64 and sent to the server.
  • When the server receives the image, it saves it to a temporary file for the duration of the process. After that, our server sends the image to the API.
  • The API replies with a JSON object containing all the information that we need.
  • Finally, our server deletes the temporary file and replies to the client's request.

Implementation

Let's get to it. First of all, we have to install 3 new dependencies on our server; they will let us focus only on the goal of this tutorial. These are the dependencies:

  • Express - a flexible Node.JS web application framework. It provides a set of features, such as Middlewares, that let us build web applications much faster.
  • Body Parser - a Node.JS body-parsing Middleware that parses the request's body so we can access its data easily.
  • Watson Developer Cloud - a Node.JS library that lets us use the Watson APIs through functions.

Let's install them now!

npm install watson-developer-cloud express body-parser

Our package.json should now look like this:
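Something along these lines; the version numbers are illustrative and will likely differ on your machine:

{
  "name": "server",
  "version": "1.0.0",
  "main": "index.js",
  "dependencies": {
    "body-parser": "^1.18.2",
    "express": "^4.16.2",
    "watson-developer-cloud": "^3.2.1"
  }
}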

Creating our main Middleware

Now we are going to start the implementation of our server. First, open the index.js in our server folder; there we will create our main Middleware, which will be responsible for receiving the client's request and replying to it. We are going to use Express to create our Middleware, like this:
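A minimal sketch of that file (the reply text is just a placeholder):

// server/index.js
const express = require('express');
const app = express();

// Use the PORT environment variable, or fall back to 8080.
const port = process.env.PORT || 8080;

// Reply to POST requests on the root path with a simple text.
app.post('/', (req, res) => {
  res.send('Hello from the server!');
});

app.listen(port);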

This is a simple Middleware that waits for a POST request and replies with a simple text. Let's understand what this code does exactly:

  • First, we require Express and initialize it.
  • Then, we get the port from the environment variables; if there is no environment variable called PORT, we assume the default value 8080.
  • Next, we create a callback function that will be called every time we receive a POST request on the root path.
  • Finally, we set the port that Express must listen on.

Our main Middleware is done; however, we need something to test it with. Let's implement a simple client that is able to upload an image and send it to our server.

Image Uploader

First of all, we have to install Axios, a Promise-based HTTP client for the browser and Node.JS; we are going to use it for sending requests to our server. Axios is a great HTTP client and it is easy to use. Use the command below on the client to install it:

npm install axios --save-dev

Our client's package.json should now look like this:
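Roughly like this; the React entries come from Create-React-App, and the version numbers are illustrative:

{
  "name": "client",
  "version": "0.1.0",
  "dependencies": {
    "react": "^16.2.0",
    "react-dom": "^16.2.0",
    "react-scripts": "1.1.0"
  },
  "devDependencies": {
    "axios": "^0.17.1"
  },
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build"
  }
}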

Now, we will create a new folder called facial-recognition inside client/src. There, we are going to create a JS file called index.js and a new folder called components. Inside components, we need to create another JS file called ImageUploader.js; this component will upload the image and show it to the client.

Let's start creating our client. First, open ImageUploader.js and write the code below:
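A sketch of the component: it renders a file input, converts the chosen file to a Base64 data URL with FileReader, shows a preview, and fires the onChange property with the encoded image:

// client/src/facial-recognition/components/ImageUploader.js
import React, { Component } from 'react';

class ImageUploader extends Component {
  constructor(props) {
    super(props);
    this.state = { image: null };
    this.handleChange = this.handleChange.bind(this);
  }

  handleChange(event) {
    const file = event.target.files[0];
    if (!file) return;

    // Read the file and convert it to a Base64 data URL.
    const reader = new FileReader();
    reader.onload = () => {
      this.setState({ image: reader.result });
      // Notify the parent component that a new image was uploaded.
      this.props.onChange(reader.result);
    };
    reader.readAsDataURL(file);
  }

  render() {
    return (
      <div>
        <input type="file" accept="image/*" onChange={this.handleChange} />
        {this.state.image && <img src={this.state.image} alt="uploaded" />}
      </div>
    );
  }
}

export default ImageUploader;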

This component will be in charge of uploading the image and showing it to the user. Every time the user uploads a new image, the ImageUploader component will fire an onChange event that should be handled by the component that rendered it. Now, we need to implement our index.js, which will render the ImageUploader and make the HTTP request to the server. So, open the index.js and put this code in it:
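A sketch of the container component, assuming the prop-types package is installed and that the server expects the Base64 image in a field called image (the same field the server sketches below read):

// client/src/facial-recognition/index.js
import React, { Component } from 'react';
import PropTypes from 'prop-types';
import axios from 'axios';
import ImageUploader from './components/ImageUploader';

class FacialRecognition extends Component {
  constructor(props) {
    super(props);
    // Create an Axios instance pointing at our server.
    this.service = axios.create({ baseURL: props.serviceDomain });
    this.handleImageChange = this.handleImageChange.bind(this);
  }

  handleImageChange(image) {
    // Send the Base64 image to the server.
    this.service
      .post('/', { image })
      .then(response => console.log(response.data))
      .catch(error => console.error(error));
  }

  render() {
    return <ImageUploader onChange={this.handleImageChange} />;
  }
}

FacialRecognition.propTypes = {
  serviceDomain: PropTypes.string.isRequired,
};

export default FacialRecognition;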

As you can see, this component waits for a call from the ImageUploader's onChange and then makes an HTTP request using an Axios instance created beforehand in the constructor. We make this request passing the Base64 image that we received from the ImageUploader. The server's domain is passed in via a property, but we need to be sure that it is not undefined. For that, we used PropTypes, a type-checking tool for React props; with it, we can tell ReactJS that our serviceDomain must be passed via a property, and if it is not, ReactJS will display a warning. You can read more about Typechecking here.

Now, we have to change App.js to render the FacialRecognition component. Our App.js must look like this:
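A sketch, assuming the server runs on localhost:8080:

// client/src/App.js
import React, { Component } from 'react';
import FacialRecognition from './facial-recognition';

class App extends Component {
  render() {
    // Pass the server's domain so the component knows where to send requests.
    return <FacialRecognition serviceDomain="http://localhost:8080" />;
  }
}

export default App;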

App.js only renders the FacialRecognition component and passes it the domain of our server. Now, if we execute both server and client and then upload an image on the client, we will receive an error message just like this:

Network error

But why is it not working? It is simple: we are trying to make a request from localhost:3000 to localhost:8080, and the two origins are completely different. When that happens, the browser enforces CORS (Cross-Origin Resource Sharing), a mechanism that uses additional HTTP headers to tell the browser that a web application running on one domain has permission to access resources from a server on a different domain. That is our case, because Create-React-App runs on a different port.

If you have never heard of that, you should read this great article about CORS. So, when we make the POST request, before the browser sends it to the server, it sends an OPTIONS request to check whether the server is able to receive the request. On the server, we need to reply to the OPTIONS request; to do that, we are going to create a Middleware that will be called every time a request arrives at the server. This Middleware will add 3 headers to our HTTP response: Access-Control-Allow-Origin, Access-Control-Allow-Methods and Access-Control-Allow-Headers. Inside our index.js in our server folder, create this Middleware to allow CORS:
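A sketch of that Middleware; register it in server/index.js before the routes:

// Allow cross-origin requests coming from the React development server.
app.use((req, res, next) => {
  res.header('Access-Control-Allow-Origin', 'http://localhost:3000');
  res.header('Access-Control-Allow-Methods', 'POST, OPTIONS');
  res.header('Access-Control-Allow-Headers', 'Origin, X-Requested-With, Content-Type, Accept');
  next();
});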

This function creates the Middleware that allows POST requests from the domain localhost:3000; if another domain tries to make a request to the server, it will be denied. The method next() continues the execution of the other Middlewares. After doing that, if we restart the server and try to upload an image again, we will see a message like this:

Successful message

Now it is really working! As the next step, we need to read the request's body and get our parameters; to do this, we are going to use the BodyParser. It is simple: we have to create a new Middleware that calls the method json() of the BodyParser. It will be something like this:
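A sketch, assuming the 10Mb limit discussed below:

const bodyParser = require('body-parser');

// Parse JSON request bodies; the limit leaves room for big Base64 images.
app.use(bodyParser.json({ limit: '10mb' }));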

Here, we just registered the Middleware returned by the method json(); this Middleware reads the request's body, converts it to a JSON object and saves it in a variable inside the request, so we can access our parameters using req.body. The method json() receives a JSON object with options; in our case, we limited the body to 10Mb to avoid problems with big images. If you want to read more about BodyParser, you can access the documentation here.

Integration

Up to this point, our application is only able to receive and reply to requests; with that in place, we can create the integration to process the image. Let's create the script that will make the integration with the API: inside our server folder, create a new JS file called FacialRecognition. Open the file and write the code below:
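A sketch of that script; the version date and the API_KEY environment variable name are assumptions, so adjust them to your Watson credentials (depending on your service instance, you may need api_key instead of iam_apikey):

// server/FacialRecognition.js
const fs = require('fs');
const path = require('path');
const VisualRecognitionV3 = require('watson-developer-cloud/visual-recognition/v3');

// Create the Watson instance with the version and the API Key from .env.
const visualRecognition = new VisualRecognitionV3({
  version: '2018-03-19',
  iam_apikey: process.env.API_KEY,
});

const TMP_DIR = path.join(__dirname, 'tmp');

// Save the Base64 data to a temporary file, hand the file's path to the
// callback, and delete the file when the callback signals it is done.
function saveTmp(data, callback) {
  const filePath = path.join(TMP_DIR, `${Date.now()}.jpg`);
  const image = Buffer.from(data.replace(/^data:image\/\w+;base64,/, ''), 'base64');

  fs.writeFileSync(filePath, image);
  callback(filePath, () => fs.unlinkSync(filePath));
}

// Create the tmp folder on startup; if it already exists, just ignore it.
try {
  fs.mkdirSync(TMP_DIR);
} catch (err) {
  if (err.code !== 'EEXIST') throw err;
}

module.exports = {
  recognizeFaces(imageData) {
    return new Promise((resolve, reject) => {
      saveTmp(imageData, (filePath, done) => {
        const params = { images_file: fs.createReadStream(filePath) };

        // Call the Watson API that detects all the faces in the image.
        visualRecognition.detectFaces(params, (err, response) => {
          done(); // Remove the temporary file.
          if (err) {
            reject(`Error detecting faces: ${err.message}`);
          } else {
            resolve(response);
          }
        });
      });
    });
  },
};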

This script is in charge of taking the image and sending it to the service. First, it saves the image to a temporary file and then sends it to the Watson service, which returns a JSON object with the information we need.

  • First, we require the Watson API and create a Watson instance, passing the version that we want to use (in our case, the newest one) and our API Key (the one inside the .env file).
  • The method saveTmp() saves the data to a temporary file; it receives the data that we want to save and a callback that is called when the file is saved. When the execution of the callback ends, the temporary file is deleted from the system.
  • We create a new object that will be exported; this object contains the method that communicates with the API.
  • When the script is loaded, we create a new folder called tmp using the method mkdir. If the folder already exists, we just ignore it.
  • The method recognizeFaces() is responsible for calling the API that detects all the faces in the image. First, it saves the image to a temporary file using the method saveTmp() and then sends it to the Watson API. When the API replies to our request, we call the Promise's resolve with the JSON object; if some error occurs, we call the Promise's reject, passing a string with the error.

Now, we just need to change our main Middleware inside index.js to something like this:
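A sketch of the updated Middleware, assuming the client sends the Base64 image in a field called image, as in the client sketch above:

const facialRecognition = require('./FacialRecognition');

// Delegate the image to the integration script and reply with the result.
app.post('/', (req, res) => {
  facialRecognition
    .recognizeFaces(req.body.image)
    .then(result => res.status(200).json(result))
    .catch(error => res.status(500).send(error));
});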

This code just sends the response with the HTTP code 200 (Success) and the JSON object when the operation finishes successfully, or sends the HTTP code 500 (Internal Error) when the operation hits an error. Now, if we execute our application and upload an image, we should receive this message:

Receiving the JSON object

And, if we take a look at the Network window in our browser, we will see that the server's response was a JSON like this:
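The exact values depend on the image, but the face detection response has roughly this shape (the numbers here are illustrative):

{
  "images": [
    {
      "faces": [
        {
          "age": { "min": 23, "max": 26, "score": 0.92 },
          "gender": { "gender": "MALE", "score": 0.99 },
          "face_location": { "height": 111, "width": 92, "left": 458, "top": 120 }
        }
      ]
    }
  ],
  "images_processed": 1
}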

It is working perfectly now. The server is done; we just need to do some work on our client, but that is for the next part, where we are going to use this response to show all the faces in the image, along with the age and the gender of each one. That is it for now!

If you want to clone or fork this project, here is the Github Link. If you have any doubts or suggestions, or if you see something wrong, please contact us at this email: hackingloverscontact@gmail.com

See you in the next tutorial!