Currently, big tech companies such as Microsoft, Google, and Amazon (to name a few) offer cognitive services on their cloud platforms.
With these services it is possible to identify faces, objects, text, sounds, etc. Does anyone know how these services work internally?
The only info I could find was at the API level. I assume the services use some kind of neural network that is trained on large amounts of data.
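None of the vendors publish their internals, so this is only a guess, but the assumed core idea (a trained network mapping input features to class probabilities) can be sketched in a few lines. Everything here is made up for illustration: the weights, the feature vector, and the two-class "cat"/"dog" setup; real services would use deep networks with millions of learned parameters.

```python
import math

def softmax(scores):
    """Convert raw class scores into probabilities that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def forward(features, weights, biases):
    """One dense layer: score_j = sum_i features[i] * weights[j][i] + biases[j]."""
    scores = [sum(f * w for f, w in zip(features, row)) + b
              for row, b in zip(weights, biases)]
    return softmax(scores)

# Hypothetical "trained" parameters for two classes.
weights = [[0.8, -0.2, 0.1],    # "cat" row
           [-0.5, 0.6, 0.3]]    # "dog" row
biases = [0.0, 0.1]

features = [1.0, 0.2, 0.5]      # e.g. features extracted from an image
probs = forward(features, weights, biases)
labels = ["cat", "dog"]
prediction = labels[probs.index(max(probs))]
```

The service's value is in the training (the data and the learned weights), not in the inference step itself, which is why the public APIs only expose the prediction.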
In my experience the Google services are more accurate than the Azure services. Perhaps the Google models are trained for longer or on more data?