Analyze a photo with Microsoft's Cognitive Services Vision API on Xamarin.Android
Microsoft’s Cognitive Services are machine-learned APIs that analyze text, images, videos and more for you in the cloud and send back meta information about the provided content. They are free to use up to a certain threshold and are easy to integrate into your applications. You can either use classic REST calls or the client SDK (NuGet), if one is available for your platform. I stumbled over providing the picture in the right format to the Computer Vision API and decided to write this blog post.
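For the REST route, the request is a plain HTTP POST with the image bytes in the body. Here is a minimal sketch; the region in the endpoint URL, the API version, and the chosen visualFeatures values are assumptions based on the v1.0 Computer Vision API, so adjust them to your own subscription:

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static async Task<string> AnalyzeViaRestAsync(byte[] imageBytes)
{
    using (var client = new HttpClient())
    {
        // The subscription key comes from the Azure portal;
        // region ("westus") and version ("v1.0") are assumptions.
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YOUR_API_KEY");
        var url = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze"
                + "?visualFeatures=Description,Tags";

        using (var content = new ByteArrayContent(imageBytes))
        {
            // Raw image bytes go out as application/octet-stream
            content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
            var response = await client.PostAsync(url, content);
            return await response.Content.ReadAsStringAsync(); // JSON analysis result
        }
    }
}
```

The SDK shown below wraps exactly this kind of call for you, which is usually the more comfortable option.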
So if you want to get your picture analyzed, you need to pass a Stream object to the VisionServiceClient.AnalyzeImageAsync() method. On Android, you mostly work with streams when accessing the file system, but not when taking pictures. In those cases you often deal with byte[] or Android.Graphics.Bitmap objects. You can convert both to a Stream easily, once you know how:
Converting a byte[] or Bitmap to a Stream
Especially when working with the camera, you might end up with a Bitmap object. You can turn it into a stream with the Compress() method, but because writing to the stream leaves its position at the end, you need to seek back to the beginning afterwards (that’s where I stumbled). Otherwise the SDK will throw an exception.
```csharp
using (var stream = new MemoryStream())
{
    // Compress with quality 100 (a quality of 0 would produce a heavily degraded JPEG)
    imageBitmap.Compress(Bitmap.CompressFormat.Jpeg, 100, stream);

    // "Rewind" the stream to the beginning, otherwise the SDK throws an exception
    stream.Seek(0, SeekOrigin.Begin);

    var visionServiceClient = new VisionServiceClient("YOUR_API_KEY");
    var visualFeatures = new VisualFeature[]
    {
        VisualFeature.Adult, VisualFeature.Categories, VisualFeature.Color,
        VisualFeature.Description, VisualFeature.Faces, VisualFeature.ImageType,
        VisualFeature.Tags
    };
    var result = await visionServiceClient.AnalyzeImageAsync(stream, visualFeatures);
}
```
If you already have a byte[], you don’t need the first two lines of the using statement and can create the MemoryStream directly from it with var stream = new MemoryStream(yourByteArray).
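Put together, the byte[] variant looks like this. A MemoryStream created from a byte[] already starts at position 0, so no Seek is needed; yourByteArray and the API key placeholder are stand-ins for your own values:

```csharp
using (var stream = new MemoryStream(yourByteArray))
{
    // No Seek required: the stream starts at the beginning
    var visionServiceClient = new VisionServiceClient("YOUR_API_KEY");
    var result = await visionServiceClient.AnalyzeImageAsync(
        stream,
        new[] { VisualFeature.Description, VisualFeature.Tags });
}
```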
I hope this helps somebody save some time.