Identifying people on photos using Azure Cognitive Services

My previous post about Azure Cognitive Services described how to detect faces in photos. In this post I will focus on identifying people in photos. This post is based on my Azure Cognitive Services sample application, which has most of the Face API support implemented, and the goal is to describe the identification process briefly and also show some of the code I have written.

How Azure Cognitive Services identifies people

Face API has its own mechanism for identifying people in photos. In short, it needs a set of analyzed photos of the people it should recognize.

Before identifying people we need to introduce their faces to Cognitive Services. With a brand new Cognitive Services account we start by creating a person group. We may have groups like Family, Friends, Colleagues, etc. After creating a group we add people to it and introduce up to 10 photos per person to Cognitive Services. After this the group must be “trained”, and then we are ready to identify people in photos.
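The steps above can be sketched with the same FaceServiceClient used later in this post. The group ID, person name and file path below are made-up values for illustration only:

```csharp
// Hypothetical setup flow: create a person group, add a person,
// register one face photo for that person, then train the group.
await FaceClient.CreatePersonGroupAsync("family", "Family");

var person = await FaceClient.CreatePersonAsync("family", "John");

using (var photo = System.IO.File.OpenRead(@"C:\Photos\john.jpg"))
{
    await FaceClient.AddPersonFaceAsync("family", person.PersonId, photo);
}

// The group must be trained before it can be used for identification.
await FaceClient.TrainPersonGroupAsync("family");
```

Remember that after adding or removing faces the group has to be trained again before the changes take effect.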

Training means that the cloud service analyzes the face characteristics detected before and does its magic to better identify these people in photos. After adding or removing photos, a person group must be trained again. Here is an example of the points that define a face for Face API. It is possible to draw polylines through sets of these points to outline parts of the face like the lips, nose and eyes.

Getting started with Face API

Before using Cognitive Services we need access to Microsoft Azure and a Cognitive Services Face API account. Face API also has a free tier that offers more API calls than are really needed to get started and build a first application that uses face detection.

Code alert! I am working on a sample application called CognitiveServicesDemo for my upcoming speaking engagements. Although it is still pretty raw, it can be used to explore the Face API of Cognitive Services.

After creating a Face API account, API keys are available. These keys, together with the Face API service endpoint URL, are needed to communicate with Face API from the sample application.
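As a minimal sketch, creating the client with the Microsoft.ProjectOxford.Face package looks roughly like this; the key and endpoint values below are placeholders you replace with your own from the Azure portal:

```csharp
using Microsoft.ProjectOxford.Face;

// Subscription key and endpoint URL come from the Azure portal.
var faceClient = new FaceServiceClient(
    "your-subscription-key",
    "https://westeurope.api.cognitive.microsoft.com/face/v1.0");
```

The endpoint depends on the Azure region where the Face API account was created.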

Person groups and people

Here are some screenshots from my sample application. The first one shows the list of person groups and the second one the people in the Family group.

The Families group has some photos and is already trained. Training is easy to do in code. Here is the controller action that takes a person group ID and lets FaceServiceClient train the given person group.

public async Task<ActionResult> Train(string id)
{
    await FaceClient.TrainPersonGroupAsync(id);

    return RedirectToAction("Details", new { id = id });
}

Most calls to Face API are simple ones and the code doesn't get very ugly. Of course, there are a few exceptions, as always.
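Training runs asynchronously on the service side, so the Train action above only starts it. Assuming the ProjectOxford client's GetPersonGroupTrainingStatusAsync method, one can poll until training finishes, roughly like this:

```csharp
// Poll the service until it has finished training the person group.
TrainingStatus trainingStatus;
do
{
    await Task.Delay(1000);
    trainingStatus = await FaceClient.GetPersonGroupTrainingStatusAsync(id);
}
while (trainingStatus.Status == Status.Running);
```

For small groups training usually completes in a few seconds, so a simple delay loop like this is enough for a demo application.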

Identifying people

To identify people I use a photo taken in Chisinau, Moldova, when my daughter was very small.

To identify who is in a photo, three service calls are needed:

  1. Detect faces from photo and save face rectangles
  2. Identify faces from given person group based on detected face rectangles
  3. Find names of people from person group

Identifying is a little bit tricky, as we don't always get exact matches for people but also so-called candidates. It means that the identification algorithm cannot make an accurate decision about which of two or three possible candidates is shown in a face rectangle.
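One simple way to handle this ambiguity, assuming each candidate exposes a Confidence property as in the ProjectOxford client, is to pick the candidate with the highest confidence:

```csharp
// result is an IdentifyResult for one detected face.
// Order the candidates by confidence and take the best match.
var best = result.Candidates
    .OrderByDescending(c => c.Confidence)
    .FirstOrDefault();

if (best != null)
{
    var person = await FaceClient.GetPersonAsync(personGroupId, best.PersonId);
    // person.Name is the most likely match for this face rectangle.
}
```

My sample application takes a different route and shows all candidates for each face, which is more honest for a demo.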

Here is the controller action that does the identification. If it is a GET request, then a form with an image upload box and person group selection is shown. In the case of POST, it is expected that there is an image to analyze. I did some base controller magic to have a copy of the uploaded image available for requests automagically. RunOperationOnImage is a base controller method that creates a new image stream and operates on it. This is needed because Face API methods dispose the given image streams automatically.

public async Task<ActionResult> Identify()
{
    var personGroupId = Request["PersonGroupId"];
    var model = new IdentifyFacesModel();

    var groups = await FaceClient.ListPersonGroupsAsync();
    model.PersonGroups = groups.Select(g => new SelectListItem
                                            {
                                                Value = g.PersonGroupId,
                                                Text = g.Name
                                            }).ToList();

    if (Request.HttpMethod == "GET")
    {
        return View(model);
    }

    Face[] faces = new Face[] { };
    Guid[] faceIds = new Guid[] { };
    IdentifyResult[] results = new IdentifyResult[] { };

    await RunOperationOnImage(async stream =>
    {
        faces = await FaceClient.DetectAsync(stream);
        faceIds = faces.Select(f => f.FaceId).ToArray();

        if (faceIds.Length > 0)
        {
            results = await FaceClient.IdentifyAsync(personGroupId, faceIds);
        }
    });

    if (faceIds.Length == 0)
    {
        model.Error = "No faces detected";
        return View(model);
    }

    foreach (var result in results)
    {
        var identifiedFace = new IdentifiedFace();
        identifiedFace.Face = faces.FirstOrDefault(f => f.FaceId == result.FaceId);

        foreach (var candidate in result.Candidates)
        {
            var person = await FaceClient.GetPersonAsync(personGroupId, candidate.PersonId);
            identifiedFace.PersonCandidates.Add(person.PersonId, person.Name);
        }

        identifiedFace.Color = Settings.ImageSquareColors[model.IdentifiedFaces.Count];
        model.IdentifiedFaces.Add(identifiedFace);
    }

    model.ImageDump = GetInlineImageWithFaces(model.IdentifiedFaces.Select(f => f.Face));
    return View(model);
}

Here is the result of identifying people by my sample application. Me, my girlfriend and our minion were all identified successfully.

My sample application also shows how to draw rectangles of different colors around detected faces, so take a look at it.

Wrapping up

Using Azure Cognitive Services to detect faces in photos and to identify people is actually simple. Although there are REST services available, Microsoft also provides us with well designed API packages for using the Face API services. One interesting thing to notice: the image to be analyzed does not matter to the API client in any way; it handles the image only as a stream of bytes, meaning that there are no dependencies on graphics and image processing libraries. I have a working sample application called CognitiveServicesDemo available for those who want to get more familiar with Azure Cognitive Services, and it is a good point to start.

Categories: ASP.NET Azure

Comments

  • hello, whenever I add a face I get an "image size is too small" error.
    I uploaded a number of different sizes but still get the same error. Any recommendations?

  • If you are using the sample application, then it makes images smaller automatically. You can set the minimal size in code. It's possible it has changed over time.

  • I tried fixing the code like this and then it worked:

    public async Task<ActionResult> AddFace()
    {
        var id = Request["id"];
        var personId = Guid.Parse(Request["personId"]);

        try
        {
            //await FaceClient.AddPersonFaceAsync(id, personId, Request.Files[0].InputStream);
            await RunOperationOnImage(async stream =>
            {
                await FaceClient.AddPersonFaceAsync(id, personId, stream);
            });
        }
        catch (Exception ex)
        {
            ViewBag.Error = ex.Message;
            return View();
        }

        return RedirectToAction("Index", new { id = id });
    }

  • Thanks, very nice info.
    I was wondering how to write this line of code in VB:
    faces.Select(f => f.FaceId).ToArray();

  • This is LINQ Select() method. Should be something like this in VB.NET:

    faces.[Select](Function(f) f.FaceId).ToArray
