NB: The usual blog disclaimer for this site applies to posts around HoloLens. I am not on the HoloLens team. I have no details on HoloLens other than what is on the public web, so what I post here is just from my own experience experimenting with pieces that are publicly available, and you should always check the official developer site for the product documentation.

Following up from this earlier post;

I’d finished up that post by flagging that what I was doing with a 2D UI felt weird: I was looking through my HoloLens at a 2D app which then displayed the contents of the HoloLens webcam back to me and, while things seemed to work fine, it felt like a hall of mirrors.

Moving the UI to an immersive 3D app built in something like Unity would make this a little easier to try out and that’s what this post is about.

Moving the code as I had it across to Unity hasn’t proved difficult at all.

I spun up a new Unity project and set it up for HoloLens development by setting the typical settings like;

Switching the target platform to UWP (I also switched to the .NET backend and its 4.6 support)

Switching on support for the Windows Mixed Reality SDK

Moving the camera to the origin, changing its clear flags to solid black and changing the near clipping plane to 0.85

Switching on the capabilities that let my app access the camera and the microphone
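Most of those are editor settings, but the camera changes can also be applied from a script. Here’s a minimal sketch (the CameraSetup class name is my own, and it assumes Camera.main is the camera the scene renders with):

```csharp
using UnityEngine;

// Minimal sketch: applies the camera settings described above at startup.
// Assumes this component is attached to a GameObject in the scene and that
// Camera.main is the camera used for rendering on HoloLens.
public class CameraSetup : MonoBehaviour
{
    void Awake()
    {
        var camera = Camera.main;

        // Holographic apps render from the user's head position, so the
        // camera sits at the origin.
        camera.transform.position = Vector3.zero;

        // A solid black background renders as transparent on HoloLens.
        camera.clearFlags = CameraClearFlags.SolidColor;
        camera.backgroundColor = Color.black;

        // Keep holograms from rendering uncomfortably close to the user.
        camera.nearClipPlane = 0.85f;
    }
}
```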

and, from there, I brought across the .onnx file containing my model and placed it as a resource in Unity (with a .bytes extension so that Unity will load it as a TextAsset);

and then I brought the code across from the XAML-based UWP project as much as I could, conditionally compiling most of it out with the ENABLE_WINMD_SUPPORT constant, since most of the code that I’m trying to run here is entirely UWP dependent and isn’t going to run in the Unity editor.
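The shape of that conditional compilation, reduced to a minimal (hypothetical) example, looks like this — the UWP-only using statements and code are fenced off so the script still compiles inside the Unity editor:

```csharp
using UnityEngine;

#if ENABLE_WINMD_SUPPORT
// These namespaces only exist when building for UWP with the .NET backend,
// so they have to be fenced off too.
using Windows.Media.Capture;
#endif // ENABLE_WINMD_SUPPORT

public class ConditionalExample : MonoBehaviour
{
    void Start()
    {
#if ENABLE_WINMD_SUPPORT
        // Compiled (and run) only in a UWP build; inside the Unity editor
        // this whole block disappears and Start() is empty.
        var mediaCapture = new MediaCapture();
        Debug.Log("Running with WinMD support");
#endif // ENABLE_WINMD_SUPPORT
    }
}
```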

In terms of code, I ended up with only 2 code files;

the dachshund file started life as code generated for me by the mlgen tool in the first post in this series, although I did have to alter it to get it to work after it had been generated.

The code uses the underlying LearningModelPreview class, which claims to be able to load a model both from a storage file and from a stream. Because, in this instance inside of Unity, I’m going to load the model using Unity’s Resources.Load() mechanism, I end up with a byte[] for the model and so I wanted to feed it into the LoadModelFromStreamAsync() method. However, that didn’t seem to be implemented yet, so I had to do a minor hack and write the byte array out to a temporary file before feeding it to the LoadModelFromStorageFileAsync() method.

That left this piece of code looking as below;

```csharp
#if ENABLE_WINMD_SUPPORT
namespace dachshunds.model
{
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Runtime.InteropServices.WindowsRuntime;
    using System.Threading.Tasks;
    using Windows.AI.MachineLearning.Preview;
    using Windows.Media;
    using Windows.Storage;
    using Windows.Storage.Streams;

    // MIKET: I renamed the auto generated long number class names to be 'Daschund'
    // to make it easier for me as a human to deal with them 🙂
    public sealed class DachshundModelInput
    {
        public VideoFrame data { get; set; }
    }
    public sealed class DachshundModelOutput
    {
        public IList<string> classLabel { get; set; }
        public IDictionary<string, float> loss { get; set; }

        public DachshundModelOutput()
        {
            this.classLabel = new List<string>();
            this.loss = new Dictionary<string, float>();

            // MIKET: I added these 3 lines of code here after spending *quite some time* 🙂
            // trying to debug why I was getting a binding exception at the point in the
            // code below where the call to LearningModelBindingPreview.Bind is called
            // with the parameters ("loss", output.loss) where output.loss would be
            // an empty Dictionary<string,float>.
            //
            // The exception would be
            // "The binding is incomplete or does not match the input/output description. (Exception from HRESULT: 0x88900002)"
            // and I couldn't find symbols for Windows.AI.MachineLearning.Preview to debug it.
            // So...this could be wrong but it works for me and the 3 values here correspond
            // to the 3 classifications that my classifier produces.
            //
            this.loss.Add("daschund", float.NaN);
            this.loss.Add("dog", float.NaN);
            this.loss.Add("pony", float.NaN);
        }
    }
    public sealed class DachshundModel
    {
        private LearningModelPreview learningModel;

        public static async Task<DachshundModel> CreateDachshundModel(byte[] bits)
        {
            // Note - there is a method on LearningModelPreview which seems to
            // load from a stream but I got a 'not implemented' exception and
            // hence using a temporary file.
            IStorageFile file = null;
            var fileName = "model.bin";

            try
            {
                file = await ApplicationData.Current.TemporaryFolder.GetFileAsync(
                    fileName);
            }
            catch (FileNotFoundException)
            {
            }
            if (file == null)
            {
                file = await ApplicationData.Current.TemporaryFolder.CreateFileAsync(
                    fileName);

                await FileIO.WriteBytesAsync(file, bits);
            }
            var model = await DachshundModel.CreateDachshundModel((StorageFile)file);

            return (model);
        }
        public static async Task<DachshundModel> CreateDachshundModel(StorageFile file)
        {
            LearningModelPreview learningModel =
                await LearningModelPreview.LoadModelFromStorageFileAsync(file);

            DachshundModel model = new DachshundModel();
            model.learningModel = learningModel;

            return model;
        }
        public async Task<DachshundModelOutput> EvaluateAsync(DachshundModelInput input)
        {
            DachshundModelOutput output = new DachshundModelOutput();

            LearningModelBindingPreview binding =
                new LearningModelBindingPreview(learningModel);

            binding.Bind("data", input.data);
            binding.Bind("classLabel", output.classLabel);

            // MIKET: this generated line caused me trouble. See MIKET comment above.
            binding.Bind("loss", output.loss);

            LearningModelEvaluationResultPreview evalResult =
                await learningModel.EvaluateAsync(binding, string.Empty);

            return output;
        }
    }
}
#endif // ENABLE_WINMD_SUPPORT
```

and then I made a few minor modifications to the code which had previously formed the ‘code behind’ in my XAML-based app, moving it into this MainScript.cs file where it performs pretty much the same function as it did before: getting frames from the webcam, passing them to the model for evaluation and then displaying the results. That code now looks like;

```csharp
using System;
using System.Linq;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
#if ENABLE_WINMD_SUPPORT
using System.Threading.Tasks;
using Windows.Devices.Enumeration;
using Windows.Media.Capture;
using Windows.Media.Capture.Frames;
using Windows.Media.Devices;
using Windows.Storage;
using dachshunds.model;
using System.Diagnostics;
using System.Threading;
#endif // ENABLE_WINMD_SUPPORT

public class MainScript : MonoBehaviour
{
    public TextMesh textDisplay;

#if ENABLE_WINMD_SUPPORT
    public MainScript()
    {
        this.inputData = new DachshundModelInput();
        this.timer = new Stopwatch();
    }
    async void Start()
    {
        await this.LoadModelAsync();

        var device = await this.GetFirstBackPanelVideoCaptureAsync();

        if (device != null)
        {
            await this.CreateMediaCaptureAsync(device);
            await this.CreateMediaFrameReaderAsync();
            await this.frameReader.StartAsync();
        }
    }
    async Task LoadModelAsync()
    {
        // Get the bits from Unity's resource system :-S
        var modelBits = Resources.Load(DACHSHUND_MODEL_NAME) as TextAsset;

        this.learningModel = await DachshundModel.CreateDachshundModel(
            modelBits.bytes);
    }
    async Task<DeviceInformation> GetFirstBackPanelVideoCaptureAsync()
    {
        var devices = await DeviceInformation.FindAllAsync(
            DeviceClass.VideoCapture);

        var device = devices.FirstOrDefault(
            d => d.EnclosureLocation.Panel == Windows.Devices.Enumeration.Panel.Back);

        return (device);
    }
    async Task CreateMediaFrameReaderAsync()
    {
        var frameSource = this.mediaCapture.FrameSources.Where(
            source => source.Value.Info.SourceKind == MediaFrameSourceKind.Color).First();

        this.frameReader =
            await this.mediaCapture.CreateFrameReaderAsync(frameSource.Value);

        this.frameReader.FrameArrived += OnFrameArrived;
    }
    async Task CreateMediaCaptureAsync(DeviceInformation device)
    {
        this.mediaCapture = new MediaCapture();

        await this.mediaCapture.InitializeAsync(
            new MediaCaptureInitializationSettings()
            {
                VideoDeviceId = device.Id
            }
        );
        // Try and set auto focus but on the Surface Pro 3 I'm running on, this
        // won't work.
        if (this.mediaCapture.VideoDeviceController.FocusControl.Supported)
        {
            await this.mediaCapture.VideoDeviceController.FocusControl.SetPresetAsync(
                FocusPreset.AutoNormal);
        }
        else
        {
            // Nor this.
            this.mediaCapture.VideoDeviceController.Focus.TrySetAuto(true);
        }
    }
    async void OnFrameArrived(MediaFrameReader sender, MediaFrameArrivedEventArgs args)
    {
        if (Interlocked.CompareExchange(ref this.processingFlag, 1, 0) == 0)
        {
            try
            {
                using (var frame = sender.TryAcquireLatestFrame())
                using (var videoFrame = frame.VideoMediaFrame?.GetVideoFrame())
                {
                    if (videoFrame != null)
                    {
                        // From the description (both visible in Python and through the
                        // properties of the model that I can interrogate with code at
                        // runtime here) my image seems to be 227 by 227 which is an
                        // odd size but I'm assuming the underlying pieces do that work
                        // for me.
                        // If you've read the blog post, I took out the conditional
                        // code which attempted to resize the frame as it seemed
                        // unnecessary and confused the issue!
                        this.inputData.data = videoFrame;

                        this.timer.Start();
                        var evalOutput = await this.learningModel.EvaluateAsync(this.inputData);
                        this.timer.Stop();
                        this.frameCount++;

                        await this.ProcessOutputAsync(evalOutput);
                    }
                }
            }
            finally
            {
                Interlocked.Exchange(ref this.processingFlag, 0);
            }
        }
    }
    string BuildOutputString(DachshundModelOutput evalOutput, string key)
    {
        var result = "no";

        if (evalOutput.loss[key] > 0.25f)
        {
            result = $"{evalOutput.loss[key]:N2}";
        }
        return (result);
    }
    async Task ProcessOutputAsync(DachshundModelOutput evalOutput)
    {
        string category = evalOutput.classLabel.FirstOrDefault() ?? "none";
        string dog = $"{BuildOutputString(evalOutput, "dog")}";
        string pony = $"{BuildOutputString(evalOutput, "pony")}";

        // NB: Spelling mistake is built into model!
        string dachshund = $"{BuildOutputString(evalOutput, "daschund")}";

        string averageFrameDuration =
            this.frameCount == 0 ?
                "n/a" : $"{(this.timer.ElapsedMilliseconds / this.frameCount):N0}";

        UnityEngine.WSA.Application.InvokeOnAppThread(
            () =>
            {
                this.textDisplay.text =
                    $"dachshund {dachshund} dog {dog} pony {pony}\navg time {averageFrameDuration}";
            },
            false
        );
    }
    DachshundModelInput inputData;
    int processingFlag;
    MediaFrameReader frameReader;
    MediaCapture mediaCapture;
    DachshundModel learningModel;
    Stopwatch timer;
    int frameCount;

    static readonly string DACHSHUND_MODEL_NAME = "dachshunds"; // .bytes file in Unity
#endif // ENABLE_WINMD_SUPPORT
}
```

While experimenting with this code, it certainly occurred to me that I could move it to more of a “pull” model inside of Unity by trying to grab frames in an Update() method, rather than doing the work separately and then pushing the results back to the app thread. It also occurred to me that the code is very single threaded and simply drops frames if it is ‘busy’, whereas it could be smarter and process them on some other thread, perhaps a thread from the thread pool. There are lots of possibilities.
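As a rough sketch of what that ‘pull’ approach might look like (hypothetical code, not what this post actually runs — it assumes the frameReader, learningModel, inputData and ProcessOutputAsync members from MainScript above, plus a new busy flag):

```csharp
#if ENABLE_WINMD_SUPPORT
    bool busy;

    // Hypothetical 'pull' variant: each Update() grabs whatever frame is
    // current and evaluates it, skipping frames while evaluation is in flight.
    async void Update()
    {
        if (!this.busy)
        {
            this.busy = true;
            try
            {
                using (var frame = this.frameReader.TryAcquireLatestFrame())
                using (var videoFrame = frame?.VideoMediaFrame?.GetVideoFrame())
                {
                    if (videoFrame != null)
                    {
                        this.inputData.data = videoFrame;

                        var evalOutput =
                            await this.learningModel.EvaluateAsync(this.inputData);

                        // With the .NET backend, the await resumes on Unity's
                        // main thread, so the TextMesh could be updated here
                        // without InvokeOnAppThread.
                        await this.ProcessOutputAsync(evalOutput);
                    }
                }
            }
            finally
            {
                this.busy = false;
            }
        }
    }
#endif // ENABLE_WINMD_SUPPORT
```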

In terms of displaying the results inside of Unity, I no longer need to display a preview from the webcam because my eyes are already seeing the same thing that the camera sees, so I’m just left with the challenge of displaying some text. I simply added a 3D Text object into the scene and made it accessible via a public field that can be set up in the editor.

and the ScriptHolder there is just a place to put my MainScript and pass it this TextMesh to display text in;

and that’s pretty much it.

I still see a fairly low processing rate when running on the device and I haven’t yet looked into that, but here are some screenshots of me looking at photos from a Bing search on my 2nd monitor while running the app on HoloLens.

In this case the device (on my head) is around 40cm from the 24 inch monitor and I’ve got the Bing search results displaying quite large and the model seems to do a decent job of spotting dachshunds…

and dogs in general (although it has only really been trained on alsatians so it knows that they are dogs but not dachshunds);

and for whatever reason that I can’t explain I also trained it on ponies so it’s quite good at spotting those;

This works pretty well for me. I need to revisit it and take a look at whether I can improve the processing speed, and also at the problem that I flagged in my previous post around not being able to run a release build but, otherwise, it feels like progress.

The code is in the same repo as it was before – I just added a Unity project to the repo.