
Game on! Learn how to add OpenTok live video chat to Unity


When talking about game development, there is one name that quickly comes to mind. Unity has become one of the most popular engines you can use if you plan to develop a game. Its multiplatform capabilities and ease of use make it a good choice to bring your idea to life.

As with any other type of application, adding live communication features to a game is not trivial. There are plenty of complicated problems to solve. OpenTok comes to the rescue in most of these scenarios, and adding video chat to a Unity game is no exception.

In this blog post we will describe how you can integrate the OpenTok SDK into a Unity-powered game targeting the Windows platform, using the recently released OpenTok Windows SDK.

Along with the blog post, we have published a working sample that implements what we describe in this article. Please refer to it for more details. 

Adding live video chat to Unity – the big picture

Seen from a distance, the task of displaying live video in a Unity element looks pretty simple. The OpenTok SDK can provide you with a continuous stream of video frames from the session participants. Once you have a video frame, you just need to display it in a Unity element.

The first question could be, “How can I display real-time video in a Unity element?” Unity has support for displaying video clips, but live video is somewhat different. We’ll use Unity textures to draw the frames of the video stream and will assign that texture to a Unity element. More precisely, we’ll use an instance of the Texture2D class.

If we want to display around 20 frames per second, we need to be able to draw the video frames onto the texture as fast as we can, so we’ll use a low-level Unity native plugin that accesses the texture through the rendering APIs available on each system. In other words, we’ll use DirectX (9 or 11) or OpenGL to copy the video frames coming from the OpenTok SDK into the Unity texture.

To summarize: in order to receive frames, we will build a custom OpenTok renderer. Once we have a frame, we will feed it to the low-level plugin. In its own lifecycle, the low-level plugin will receive a notification from the Unity side whenever the texture needs to be redrawn (usually at 30fps). It will take the last frame set by the custom renderer and draw it using the DirectX APIs. See the diagram below for more details.

[Diagram: adding OpenTok video chat to Unity]

Using the OpenTok Windows SDK in Unity

Using the OpenTok SDK in a Windows application built with Unity could not be easier. All you need to do is download the SDK and copy the DLL files into your Assets/ folder.

Once the files are there, Unity will automatically recognize that OpenTokNET.dll is a .NET assembly, which allows you to import the OpenTok package and use the whole API.

Since OpenTok for Windows offers a .NET interface, you will be able to create a session, create a publisher, and use all the rest of the OpenTok capabilities from your C# script. In our sample, most of the OpenTok session-related code is in the OpenTokSession.cs file.

The other important part of the Unity integration is the creation of the custom video renderer that receives the video frames from the session; it lives in the VideoRender.cs file. The VideoRender class is an implementation of the OpenTok IVideoRender interface.

Using a native plugin in Unity

C/C++ development is usually the best way to tackle certain problems. Whether for performance or because you need a component that runs on multiple platforms, sometimes you’ll need to do some native development.

Unity supports native plugins that your project can call by using a managed bridge. For this sample, we used a native plugin in order to get the best performance when drawing the video frames onto the Unity texture.

Our native plugin is responsible for receiving the texture references from the Unity side, receiving the frames to draw, and drawing them. Let’s take a look at the main functions of the plugin.

The plugin is a separate project inside the overall solution. It is built into a DLL file that we’ll use later in the Unity project. For more details about building the plugin, please refer to the README file of the sample repo.

CreateRenderer

Unity will call this function whenever it needs a new renderer. Internally the plugin can support several renderers. Each renderer is represented by the following structure:

typedef struct UPRenderer {
    HANDLE render_mutex;    /* guards access from the OpenTok and render threads */
    void *last_frame;       /* most recent video frame, waiting to be drawn */
    void *texture;          /* native pointer to the Unity Texture2D */
    int frame_height;       /* size of the last received frame */
    int frame_width;
    int texture_width, texture_height;  /* size of the current texture */
} UPRenderer;

In summary, we need to save a reference to the Unity texture, a reference to the last frame, and the sizes of both the frame and the texture, so we can keep track of changes in either of them. (Live video resolution can change depending on things like network conditions or CPU usage.)

This function returns a unique ID that the rest of the plugin’s functions will use.
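As a rough sketch of how this might be implemented (the fixed-size renderer pool and the MAX_RENDERERS constant are our own illustration, not necessarily how the sample manages its renderers):

#include <windows.h>
#include <string.h>

#define MAX_RENDERERS 16  /* assumption: a small fixed pool of renderers */

static UPRenderer renderers[MAX_RENDERERS];
static int renderer_count = 0;

__declspec(dllexport) int CreateRenderer()
{
    if (renderer_count >= MAX_RENDERERS)
        return -1;
    int id = renderer_count++;
    UPRenderer *r = &renderers[id];
    memset(r, 0, sizeof(*r));
    /* The mutex serializes the OpenTok callback thread (which sets frames)
       and Unity's render thread (which draws them). */
    r->render_mutex = CreateMutex(NULL, FALSE, NULL);
    return id;  /* every other plugin function receives this id */
}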

SetRendererTexture

Textures are created and assigned to the target element on the Unity side. We use this function to send the reference of the Texture2D created on the Unity side to the plugin. The plugin simply saves that reference inside the UPRenderer struct.
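Continuing the sketch above, and matching the C# call we’ll see later in Update() (a texture pointer plus its dimensions), the implementation could be as simple as:

__declspec(dllexport) int SetRendererTexture(int rendererId, void *texture,
                                             int width, int height)
{
    UPRenderer *r = &renderers[rendererId];
    WaitForSingleObject(r->render_mutex, INFINITE);
    /* texture is the pointer obtained from Texture2D.GetNativeTexturePtr()
       on the C# side; with a Direct3D 11 device it points to an
       ID3D11Texture2D. */
    r->texture = texture;
    r->texture_width = width;
    r->texture_height = height;
    ReleaseMutex(r->render_mutex);
    return 0;
}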

ShouldCreateNewTexture

Since the video frames may not all be the same size, and a Texture2D is created with a fixed resolution, we need to know when the texture no longer fits the received video frame. The Unity project calls this function to find out whether it needs to create a new texture because the size of the frame and the size of the texture differ.
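Continuing the sketch, the check can simply compare the saved sizes; the guard against reporting a mismatch before the first frame arrives is our own addition:

__declspec(dllexport) int ShouldCreateNewTexture(int rendererId,
                                                 int *newWidth, int *newHeight)
{
    UPRenderer *r = &renderers[rendererId];
    WaitForSingleObject(r->render_mutex, INFINITE);
    /* Report a mismatch only once a frame has actually been received. */
    int mismatch = r->frame_width != 0 &&
                   (r->frame_width != r->texture_width ||
                    r->frame_height != r->texture_height);
    *newWidth = r->frame_width;
    *newHeight = r->frame_height;
    ReleaseMutex(r->render_mutex);
    return mismatch;  /* nonzero tells the C# side to create a new Texture2D */
}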

SetRendererFrame

As you can tell by its name, this function saves a reference to a video frame. The plugin will use that reference later to draw the frame onto the texture.
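Here is one possible shape for it, continuing the sketch. Note one design choice of ours: instead of only storing the pointer, this version copies the BGRA buffer (4 bytes per pixel, matching the TextureFormat.BGRA32 texture created on the C# side), so its lifetime does not depend on the caller:

#include <stdlib.h>

__declspec(dllexport) int SetRendererFrame(int rendererId, void *frame,
                                           int width, int height)
{
    UPRenderer *r = &renderers[rendererId];
    WaitForSingleObject(r->render_mutex, INFINITE);
    /* Keep our own copy of the frame so the render thread can draw it
       whenever Unity fires the next plugin event. */
    size_t size = (size_t)width * height * 4;
    r->last_frame = realloc(r->last_frame, size);
    if (r->last_frame != NULL) {
        memcpy(r->last_frame, frame, size);
        r->frame_width = width;
        r->frame_height = height;
    }
    ReleaseMutex(r->render_mutex);
    return 0;
}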

OnRenderEvent

Once we have all the elements in place, we need to use our saved references to the texture and the frame to draw one onto the other. This function is called from the render cycle, and it uses the Render function to draw the saved frame via the DirectX API. Take a look at the linked sample for more details.
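This is the one piece that must follow Unity’s low-level native plugin interface: GL.IssuePluginEvent expects a function pointer of type UnityRenderingEvent, which the plugin exposes through GetRenderEventFunc. A sketch, with Render standing in for the sample’s DirectX drawing routine:

#include "IUnityGraphics.h"  /* Unity's native plugin interface headers */

static void Render(UPRenderer *r);  /* the sample's DirectX drawing routine */

/* Runs on Unity's render thread; eventId carries the renderer id that the
   C# side passes to GL.IssuePluginEvent. */
static void UNITY_INTERFACE_API OnRenderEvent(int eventId)
{
    UPRenderer *r = &renderers[eventId];
    WaitForSingleObject(r->render_mutex, INFINITE);
    if (r->last_frame != NULL && r->texture != NULL)
        Render(r);  /* copies last_frame into the Direct3D texture */
    ReleaseMutex(r->render_mutex);
}

/* The C# bridge fetches this pointer and hands it to GL.IssuePluginEvent,
   as shown in the Update method later in this post. */
UNITY_INTERFACE_EXPORT UnityRenderingEvent UNITY_INTERFACE_API GetRenderEventFunc()
{
    return OnRenderEvent;
}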

Calling the plugin from the Unity Project

All of these functions are accessed from the C# side through a simple bridge that takes advantage of Platform Invoke methods:

public class RenderPlugin
{
    [DllImport("RenderPlugin", EntryPoint = "CreateRenderer")]
    public static extern int CreateRenderer();

    [DllImport("RenderPlugin", EntryPoint = "SetRendererFrame")]
    public static extern int SetRendererFrame(int rendererId, IntPtr frame,
            int width, int height);
    // …
}

The Render Loop

Up to this point, we have described just one side of the story. We have all the OpenTok-related code that connects to a session, publishes your video, and subscribes to the session participants. That code also receives the video frames of each video stream and sends them to the native plugin, which will, at some point, draw them onto a Unity texture. But we’re missing a key component: we need to integrate all of this into the Unity application lifecycle.

You will probably be familiar with the Unity way of working. Imagine that you have a 3D cube: when you want to do something with it, you usually attach a component, a subclass of MonoBehaviour, to the cube. That MonoBehaviour subclass has Start and Update methods, and the Unity engine calls Update every time it is about to render a new frame. That is the point where we need to let our native plugin know that it should render its saved frame into its saved texture.

Our sample has a class, OpenTokRenderer, that plays this role. In the sample, we add this component to the object where we want to render the video stream. In its Update() method, we tell the plugin that it needs to draw the frame; Unity provides a built-in function, GL.IssuePluginEvent, for this purpose. You can see it in the last line of the Update method shown below:


void Update()
{
  int newWidth = 0, newHeight = 0;
  // Recreate the texture when the plugin reports that the frame size
  // no longer matches it.
  if (RenderPlugin.ShouldCreateNewTexture(rendererId,
    ref newWidth, ref newHeight) != 0)
  {
    texture = new Texture2D(newWidth, newHeight, TextureFormat.BGRA32, false);
    RenderPlugin.SetRendererTexture(rendererId, texture.GetNativeTexturePtr(),
         newWidth, newHeight);
    GetComponent<MeshRenderer>().material.mainTexture = texture;
  }
  // Ask Unity to run the plugin's render callback on the render thread.
  GL.IssuePluginEvent(RenderPlugin.GetRenderEventFunc(), rendererId);
}

[Screenshot: OpenTok WebRTC video in a Unity game]

We will add an instance of OpenTokRenderer to each GameObject that will render the video stream of an OpenTok session participant.

Get building with Unity and OpenTok

As we have seen, adding OpenTok live video chat to Unity is perfectly doable, and thanks to the possibility of writing native code, you get the same performance as with other uses of OpenTok. For a complete working solution, please take a look at the sample code that we have written to illustrate this blog post. The sample has been written with reusability in mind, so you can reuse the OpenTokRenderer and OpenTokSession classes, as well as the native plugin, in your current game or Unity project.

 
