
Zenbag cruises to victory with OpenTok at CruzHacks


One of my favorite parts of being a developer evangelist is getting to meet fellow developers and share their journey to building awesome products. And there’s nowhere better to meet developers than a hackathon!

Recently, I had the opportunity to attend CruzHacks at my alma mater, the University of California Santa Cruz. It was great to be in a position where I could support an amazing event through sponsorship, mentoring, and judging.

I recruited my colleague Hashir to join me and we came up with a plan for how we could help the participants make the most of the weekend.

Introducing OpenTok to the hackers

The weekend was non-stop but we had a total blast. Hashir and I hosted two workshops to introduce participants to the OpenTok platform and the endless possibilities of live video. We worked with students on setting up Video Chat Embeds and the OpenTok API depending on their experience level. We also interacted on a more personal level with many developers ranging from students who didn’t know how to start a server to experienced developers who dockerized their server instances.

opentok workshop at the hackathon

I was really impressed with the wide range of ideas the students came up with for using live video. Education was one of the most popular use cases. Developers built applications ranging from 1:1 tutoring apps to full-on virtual classrooms where teachers had the option to ‘kick’ students out of the room! We also saw an app for doctor-patient interaction, an app for connecting with someone speaking a foreign language, and an app which can only be described as “Chatroulette for Venting and Roasting”.

A well deserved win for an OpenTok team

After a whole weekend of hacking, we were really proud that when it came to the final judging, one of the teams using OpenTok emerged victorious. Zenbag created an AR paint brush using ARKit on iOS and published their screen so that users in a browser could see what they were drawing. They then used the OpenTok Signaling API to send the drawing data and recreate it in their VR app using OpenGL.

Hackathon closing ceremony

The Zenbag team is composed of Max, from San Jose, and Spencer, all the way from Wisconsin. Their meeting was meant to be: they both misread the instructions for the hackathon and turned up at 8am instead of 8pm! I caught up with Max and Spencer to find out about their experiences at the hackathon.

The CruzHacks hackathon experience

Team Zenbag at the hackathon

They both said that they chose the TokBox API to try something new, and because of how simple it was to integrate with their project. They took advantage of some of the sample apps available, such as ARKit and ScreenSharing on iOS.

As they only had 48 hours to develop their app, there is still plenty left to play around with. If they were to continue developing, they would try having more than one feed coming into the app at once, and perhaps find a way to walk around the cube and see four different people drawing.

One of the topics we chatted about was favorite hackathon foods. They both chose chocolate, but Max specifically loved Awake chocolate, a caffeinated chocolate bar.

Hackers’ heroes

We also talked about which famous people they would most like to video chat with. Spencer mentioned that he would love to video chat with Elon Musk, Larry Page and Sergey Brin because he finds them very inspiring. He found Larry and Sergey particularly inspirational as they were grad students who were just having fun but managed to grow their company into one of the biggest in the world.

Elon Musk was on Max’s list too, along with Bill Gates and Peter Thiel. He said he found it amazing how they all have different political viewpoints but work together in the same industry and share a belief in getting the fundamentals right and growing from there.

Overall, Hashir and I had a great weekend. We left feeling inspired by the creativity and energy shown by all the students. I’m really looking forward to meeting more developers at HackMentalHealth, but in the meantime, why not have a go yourself? Our Video Chat Embeds are easy to get up and running in no time at all, and our Developer Center is full of resources for building applications with our API!



New web sample apps for OpenTok in React, Angular 5 and Vue


We get a lot of requests from our customers for examples of how to use OpenTok in their framework of choice. I’m here to tell you today that we are answering your pleas in 3 of the most popular Web frameworks out there: React, Angular 5 and Vue.js.

You can find the new sample apps in our Web Samples Github Repository.

React

The React Basic Video Chat App uses a popular OpenTok React component which can be found at http://github.com/aiham/opentok-react. The nice thing about using an existing component is that there is not a whole lot of code involved. The bulk of the code is some simple JSX.

<div>
  <div>Session Status: {connection}</div>
  {error ? (
    <div className="error">
      <strong>Error:</strong> {error}
    </div>
  ) : null}
  <OTSession
    apiKey={apiKey}
    sessionId={sessionId}
    token={token}
    onError={this.onSessionError}
    eventHandlers={this.sessionEventHandlers}
  >
    <button onClick={this.toggleVideo}>
      {publishVideo ? 'Disable' : 'Enable'} Video
    </button>
    <OTPublisher
      properties={{ publishVideo, width: 50, height: 50 }}
      onPublish={this.onPublish}
      onError={this.onPublishError}
      eventHandlers={this.publisherEventHandlers}
    />
    <OTStreams>
      <OTSubscriber
        properties={{ width: 100, height: 100 }}
        onSubscribe={this.onSubscribe}
        onError={this.onSubscribeError}
        eventHandlers={this.subscriberEventHandlers}
      />
    </OTStreams>
  </OTSession>
</div>

Angular 5

angular 5 logo

With the Angular 5 sample application we wrote some simple Publisher and Subscriber components along with unit tests. The components are used in the app template like so:

<app-publisher [session]="session"></app-publisher>
<app-subscriber *ngFor="let stream of streams" [stream]="stream" [session]="session"></app-subscriber>

A publisher is displayed on the page first. A new subscriber template is created for every stream in the streams array.

Also included in the app are some integration tests written for Protractor which make sure that the Publishers and Subscribers load successfully when you have 2 participants.

Vue.js

vue logo

The Vue.js sample application also takes a component based approach to building an OpenTok application. There is a simple Publisher component and a simple Subscriber component. Then the template for the Session component displays them like so:

<div id="session" @error="errorHandler">
   <publisher :session="session" @error="errorHandler"></publisher>
   <div id="subscribers" v-for="stream in streams"
:key="stream.streamId">
     <subscriber @error="errorHandler" :stream="stream"
:session="session">
</subscriber>
   </div>
 </div>

Time to get building!

Now, whatever your framework of choice, OpenTok is the platform of choice for your WebRTC application. You’ll find all the resources and links to the sample apps you need in our Developer Center, and you can sign up here for a trial on the platform, including $10 of free credit.


OpenTok version 2.13: What’s new and how you can use it


We recently released the latest version of our Client SDKs, v2.13.0, and I wanted to share some of the great new features that have gone out with this release.

Custom Media Streaming

With v2.13.0 of opentok.js we have added the ability to pass a custom audioSource and videoSource when you create a Publisher. The custom audio and video sources are MediaStreamTrack objects. This enables quite a few different use cases that our customers have been asking for.

Publishing from a Canvas Tag

The simplest thing this enables is the ability to publish the contents of a Canvas element. Anything you can draw on a Canvas can now be streamed into an OpenTok Session, and you can even record it with our Archiving API. Examples of where this might be useful are:

  • Streaming a shared whiteboard
  • Streaming the contents of an online game
  • Animoji video chat

To see a simple demo of this in action check out our new Publishing from a Canvas sample application.
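As a minimal sketch of the idea (the element ID and the connected session object are assumptions here, not code from the sample), you capture the canvas as a MediaStream and hand its video track to the Publisher:

const canvas = document.getElementById('drawing-canvas');
const canvasStream = canvas.captureStream(30); // capture at 30 fps

const publisher = OT.initPublisher('publisher', {
  videoSource: canvasStream.getVideoTracks()[0]
});

session.publish(publisher); // 'session' is an already-connected Session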

Stream Filters

stream filters opentok SDK v2.13

You can also use the custom videoSource to apply your own filter to the video. The filter is applied to the actual underlying video, so the result will show up in your archives as well. To see a demo of this in action check out the new Stream-Filter sample application.
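This isn’t the exact code of that sample, but a minimal sketch of the technique: draw the camera video onto a canvas with a filter applied, then publish the canvas.

const video = document.createElement('video');
const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d');

navigator.mediaDevices.getUserMedia({ video: true }).then((camera) => {
  video.srcObject = camera;
  video.play();
  drawFrame();
});

function drawFrame() {
  ctx.filter = 'grayscale(100%)'; // the filter is baked into the published video
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  requestAnimationFrame(drawFrame);
}

const publisher = OT.initPublisher('publisher', {
  videoSource: canvas.captureStream(30).getVideoTracks()[0]
});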

You could also use this same technique to solve other interesting use-cases like:

  • Adding branding or a watermark to the video
  • Adding subtitles to a video
  • Face detection

Publishing from a Video Tag

publishing from a video tag OpenTok SDK v2.13

In Chrome 53+ it’s also possible to call captureStream on an HTMLMediaElement. This means you can capture the video and audio of a Video element you are playing, or the audio of an Audio element, so people can watch videos or listen to music together!
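A minimal sketch (the element ID is an assumption):

const movie = document.getElementById('movie'); // a playing <video> element
const movieStream = movie.captureStream();

const publisher = OT.initPublisher('publisher', {
  videoSource: movieStream.getVideoTracks()[0],
  audioSource: movieStream.getAudioTracks()[0]
});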

To see a demo of this in action check out the new Publish-Video sample application.

H.264 Support

In June last year we announced beta support for Safari 11, and in September we announced full support. With Safari, the one caveat is that it only supports H.264 as the video codec. We have been working tirelessly to make all of our endpoints support H.264 so that we have full compatibility with Safari. With v2.13.0, our Android SDK, Windows SDK and our plugin for Internet Explorer 11 all now support H.264. This means that all of our clients can now stream video to and from a Safari endpoint. All the more reason to opt in to a Safari project.

Publisher Stats

publisher stats opentok SDK v2.13

We have supported getStats on a Subscriber for quite some time now and have written some articles about how best to use those stats. Sometimes, though, it’s hard to tell from the stats whether you are getting bad performance because of the Publisher side or the Subscriber side.

With v2.13.0 of opentok.js we have added getStats to the Publisher. This means you can tell when end users have a poor uplink on their internet connection, not just a poor downlink.
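A sketch of how this might be used; the per-entry shape below (one entry per connection consuming the published stream) is our reading of the API rather than verbatim sample code:

publisher.getStats((error, statsArray) => {
  if (error) {
    console.error(error);
    return;
  }
  // One entry per connection consuming the published stream
  statsArray.forEach(({ connectionId, stats }) => {
    console.log(connectionId, 'video bytes sent:', stats.video.bytesSent);
  });
});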

Stereo Audio Support

We have seen some really interesting use cases for improved audio quality on our platform, including remote stethoscopes. With v2.13.0 we are adding one more feature to our Audio Tuning API: the ability to stream stereo audio. This means that if you are using the aforementioned Custom Streaming API to stream a video or audio file, or if you have a stereo microphone, you can stream that audio in stereo.

To see an example of this in action check out our new stereo audio sample application. This sample actually combines 2 of our new features in v2.13.0, Custom Media Streaming and Stereo Audio Support.
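As a rough sketch of how those two features combine (the enableStereo flag is our reading of the 2.13 publisher options, and the element ID is an assumption):

const music = document.getElementById('music'); // a playing <audio> element
const musicStream = music.captureStream();

const publisher = OT.initPublisher('publisher', {
  audioSource: musicStream.getAudioTracks()[0],
  videoSource: null, // audio-only publisher
  enableStereo: true
});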

Fixed Issues

As well as adding all these great new features in v2.13.0, we have also resolved some issues. For a full list of the features and fixes in our client SDKs have a look at the release notes.


Putting our heads together for solutions for mental health


Co-authored by Manik Sachdeva, TokBox Developer Evangelist.

When we talk about health, it’s often physical health that is at the forefront of our plans. Mental health is equally important, but it often takes a back seat. That’s perhaps not surprising: mental health is not well understood by the general public, and as a result it can come with a big dose of stigma attached.

To top it off, it can be difficult to find a professional to help overcome challenges. Even when you do have access to a qualified clinician, the cost can be prohibitive.

A major health issue

So how much of an issue is mental health really? Well, according to the charity MIND, 1 in 4 adults in the UK will experience a mental health issue in a year, while research shows that mental health problems are one of the main causes of the overall disease burden worldwide.

There is also an increasing focus on the role that technology plays in causing or exacerbating mental health issues. Many of us will be familiar with the feeling of stress from always being available via phone or email.

Some recent research by MIT shows a connection between smartphone and social media use and mental health issues, such as depression, in young people. This area of study is relatively new. We still have a lot to learn about how one thing affects the other but it certainly seems that our increasing reliance on tech is detrimental to our mental health.

Can tech help solve the problem?

But the tech world is famous for focusing on a problem until a solution is found. So could there also be a role for technology in ensuring that people with mental illness are able to access the services they need and get better?

The organizers of the Hack Mental Health hackathon certainly think so. This event brought together mental health professionals, technology providers and people with first hand experience of mental illness to help drive innovation in the mental health space. It was an intensive weekend of putting heads together to find solutions for those affected by mental health issues. TokBox attended as a sponsor to support the efforts of the hackers.  

Removing barriers to accessing help

Increasingly, we see live video being used to deliver all kinds of medical services, including primary care consultations, second opinions and specialist referrals. Our live video survey shows that far from being a niche technology reserved for a few specialist providers, telehealth applications with live video are widely offered and well received by patients.

In fact, 60% of our survey respondents said that they have or would be likely to use live video chat to talk to a doctor about a non-emergency condition. One of the reasons that live video has caught on so quickly in healthcare is that it offers a convenient and cost-efficient way to deliver care.

According to research carried out by One Medical group, the biggest obstacle to seeking help from a clinical professional for mental health issues is cost. So if live video can really begin to break down those cost barriers, it could have a big impact on improving access for patients.

Learning from each other

PeerLearn winners of TokBox API prize

With help from our very own developer evangelist, Manik Sachdeva, several teams at the hackathon decided to try out the TokBox API. We know from customers such as InTheRooms that live video can work really well for hosting therapy sessions, and that group sessions are a common way to approach treatment.

So we really liked the solution which PeerLearn came up with using our API. It connects two users over video to practice mental health exercises, such as a short meditation or an exercise in assertiveness. The PeerLearn team used the OpenTok-React library, which allows developers to use JSX components with the OpenTok API. They also deployed a server that used the OpenTok Node SDK to power their dynamic room and session generation.

Jesse from the team has written a great post about his experience of the event and what made it different to other hackathons.

For more information on our server SDKs or client libraries, please visit our developer center: https://tokbox.com/developer

 


Build live video mobile apps with OpenTok React Native


Over the past few weeks, I’ve been working on OpenTok React Native. As the name suggests, it’s a React Native library for OpenTok. As I work with developers at hackathons and other events, I’ve had lots of questions about React Native and OpenTok. So on April 11th I’ll be hosting a webinar where I’ll build iOS and Android applications with live video using OpenTok and React Native. 

Register for Webinar

Adding live video to mobile apps

Using this OpenTok Labs solution, you can easily add live video to your iOS and Android applications by using just a few JSX components.

<OTSession apiKey='your-api-key' sessionId='your-session-id' token='your-token'>
  <OTPublisher style={{ width: 100, height: 100 }} />
  <OTSubscriber style={{ width: 100, height: 100 }} />
</OTSession>

One of the benefits of using OpenTok React Native is that the API is very similar to OpenTok React, a web component for adding live video. This means that you don’t have to learn new APIs. In fact, you can reuse the majority of the OpenTok code between your web and mobile applications.

What is React Native?

React Native is a framework, backed by Facebook, that allows you to build native mobile applications using JavaScript and React. The benefit of using React Native is that you end up with a native application, as opposed to the hybrid application you get with frameworks such as Cordova or Ionic.

React Native comes with native APIs that allow developers to leverage hardware features and fundamental native views using JavaScript. With this in mind, we wanted to explore an OpenTok solution so React developers can add live video to their mobile applications quickly and easily.

OpenTok React Native Library

This library is written in JavaScript, Swift, and Java. We used Swift and Java so we could build on top of the existing OpenTok iOS and Android SDKs. This allowed us to leverage the SDKs and only write bridges so developers could use the supported libraries under the hood.

OpenTok React Native Samples

If you’re interested in using OpenTok React Native, take our OpenTok React Native Samples repo for a spin. The sample applications in the repo will walk you through how to publish, subscribe to multiple streams, and use the Signaling API to create a text chat.

This is an OpenTok Labs library. We highly encourage you to send pull requests and file issues so we can improve the library together. If you would like to contribute to the library or the samples, please see the repo contribution guidelines.

If you’re interested in learning more about OpenTok React Native, remember to sign up for my webinar on April 11th. Looking forward to seeing you there!

Register for Webinar

We are also sponsoring the Reactathon Hackathon on March 24-25 in San Francisco at GitHub HQ – sign up to hack and I’ll be there to help with your coding!


Putting OpenTok React into practice at Reactathon


This past week, my colleague Aaron and I had the opportunity to attend the Reactathon Advanced Conference. The conference featured some great talks, many covering React Native, GraphQL, and WebAssembly. In addition to the conference, Reactathon also hosted a hackathon, which TokBox sponsored. It was great to see the community that loves this framework come together and share their knowledge.

Making winning connections with React

At the hackathon itself, we saw a total of 19 projects, 17 of which used the OpenTok API. Many of the hackers loved the idea of connecting people all over the world by using video in a contextualized manner.

This sparked several e-learning solutions along with a few social applications, all geared towards connecting individuals who shared similar interests. Noobvolution, an eSports platform, aimed to connect gaming coaches and students who wanted 1:1 training over live video.

OpenTok React hackathon winner - elearning for gaming

Noobvolution, an eSports platform, connected gaming coaches and students for 1:1 training over live video.

The team also incorporated features such as screen sharing, text chat, and annotations to help users collaborate. They planned to add the OpenTok archiving feature so students could revisit recorded sessions in the future and keep them as a reference. It was a fitting end to the hackathon to see such a polished product take first place.

Learning with live video

Study Buddy, another e-learning solution, created an online platform for individuals interested in on-demand learning. Additionally, we saw Team Assemble use the Live Interactive Broadcast feature to create a product where meetup organizers could broadcast talks so people could attend regardless of the event’s physical location. They also recorded these sessions so people could watch them later.

With a host of applications built for learning or studying, it’s clear that innovation in the e-learning space has a long way to go, and React and OpenTok can play a part in that.

Combining Live Video with Facial Recognition

A group of students from San Diego created Mème Brûlée, a web application where individuals are connected in a video chat room and caption memes together. While a user is captioning a meme, the platform captures a screenshot of the user’s face using OpenTok and runs it through Microsoft’s Face Recognition API for sentiment analysis. The inspiration was to create a fun game that maps users’ reactions to the memes onto emojis.

Since this was a hackathon with a focus on React.js, a JavaScript library for building user interfaces, most of the hackers used OpenTok React, a web component for the OpenTok JS SDK.

If you’re interested in creating mobile applications, you can also use React Native with OpenTok. Check out my post about it here, then register for my OpenTok React Native webinar on April 11th, where I’ll walk you through creating a live video app for iOS and Android with React Native.

Register for Webinar

 


Building a text chat for Android and iOS with OpenTok React Native


As you may know, we recently announced an OpenTok Labs solution for adding live video and messaging to your React Native application. In this post, we’d like to take a deeper look at how to use the OpenTok Signaling API to build a text chat application with React Native.

OpenTok Signaling API

The OpenTok Signaling API allows developers to send messages from one OpenTok endpoint to another. One of the most common use cases for the Signaling API is a text chat. Below, we’ll walk through how to build a text chat for both iOS and Android applications using React Native.

OTSession Component

The OTSession React Native component allows you to connect to an OpenTok Session, set session event handlers, and send signals. You can connect to the session by using your API Key, Session ID, and Token, which are generated by an OpenTok Server SDK or in your TokBox account dashboard. This information is passed to OTSession using the following props:

  • apiKey
  • sessionId
  • token

Props are a way to pass data into a React component so the component can use that data to render a customized component. Similar to the credentials, you can pass the signal information using the signal prop. The signal prop takes an object with two keys, type and data, both of which are strings.

Using the information above, our OTSession component will look like the following:

<OTSession
  apiKey="your-api-key"
  sessionId="your-sessionId"
  token="your-token"
  signal={{
    type: 'signal',
    data: 'some random message',
  }}
/>

Session Event Handlers

Now that you have a way of sending signals, you can pass in session event handlers to listen for any incoming signals. To do so, let’s create an object called sessionEventHandlers where we listen to the signal event. In our case, we are only listening to the signal event, but you can choose to listen to other session events as well.

sessionEventHandlers = {
  signal: (event) => {
    // We can use event.data to get the message that we received.
  },
};

We can now use this object and pass the event handlers into our OTSession component along with the credentials and signal information. 

<OTSession
  apiKey="your-api-key"
  sessionId="your-sessionId"
  token="your-token"
  eventHandlers={sessionEventHandlers}
  signal={{
    type: 'signal',
    data: 'some random text message',
  }}
/>

Sending Additional Signals

Now that we’ve set up how to send and receive signals, we want to be able to send multiple signals. We can do this by passing new signal information to the signal prop. Behind the scenes, the OTSession component watches for updates to its props, so each time we update the signal object, a new signal is sent. To use this effectively, we can utilize the setState method that React provides.
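As a rough sketch (the handler name and state fields below are our own, not part of the library), a send handler just needs to update the state that feeds the signal prop:

sendSignal = () => {
  // 'text' holds the current TextInput value; 'signalText' feeds the
  // signal prop in render(), so updating it triggers a new signal.
  this.setState({
    signalText: this.state.text,
    text: '',
  });
};

// In render():
// <OTSession apiKey={apiKey} sessionId={sessionId} token={token}
//   signal={{ type: 'signal', data: this.state.signalText }} />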

Lastly, we want to be able to distinguish the signals we send from the ones we receive. To accomplish this, we can compare our connectionId to the connectionId associated with the signal event. The OTSession component has a method called getSessionInfo() which returns important information about the session such as the session ID, connection ID, connection data, and connection creation time. Using Refs, we can expose the getSessionInfo() method like so:

<OTSession ref={(instance) => { this.session = instance; }} />

This sets session to the OTSession instance where we can call methods available in the OTSession component. Keep in mind that it’s not recommended to use this imperative style of exposing methods in React unless you can’t do it declaratively.

Keeping track of the conversation

Now that we’re able to distinguish between incoming and outgoing messages, we’d like to keep track of all the messages. We can do this by initializing an empty array called messages and updating it each time we send or receive a signal.

Since we have all of the information we need, we can now update our sessionEventHandlers to look like the following:

sessionEventHandlers = {
  signal: (event) => {
    const myConnectionId = this.session.getSessionInfo().connection.connectionId;
    const oldMessages = this.state.messages;
    const messages = event.connectionId === myConnectionId
      ? [...oldMessages, { data: `Me: ${event.data}` }]
      : [...oldMessages, { data: `Other: ${event.data}` }];
    this.setState({ messages });
  },
};

We use the spread operator above to create a new array of messages so we don’t mutate the existing state.

After setting up the logic, we can use the following React Native UI components to render the text chat view (a render sketch follows the list):

  • TextInput
    • Used for an input box to show the message that you’re typing
  • Button
    • Used for sending the message
  • FlatList
    • Used to display the outgoing and incoming messages
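Here is a minimal render sketch under a few assumptions of our own: state holds a messages array and a text string, sendSignal is the hypothetical handler sketched earlier, and View, Text, TextInput, Button, and FlatList are imported from react-native.

render() {
  return (
    <View>
      {/* Outgoing and incoming messages */}
      <FlatList
        data={this.state.messages}
        keyExtractor={(item, index) => index.toString()}
        renderItem={({ item }) => <Text>{item.data}</Text>}
      />
      {/* Message being typed */}
      <TextInput
        value={this.state.text}
        onChangeText={(text) => this.setState({ text })}
      />
      <Button title="Send" onPress={this.sendSignal} />
    </View>
  );
}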

To see all this in action, please check out the OpenTok React Native Signaling sample. Now that you know how to build a text chat app, why not sign up for my React Native webinar on April 11th to learn more? Sign up here:

Register for Webinar


With Microsoft Edge, OpenTok Is Now Supported on All Major Browsers


Late last week, Microsoft released the April Update for Windows 10. This update contains the latest version of the Edge browser, Edge 17. We are happy to announce beta support for the Edge browser across our live video platform. All the great features of the OpenTok platform are now available in beta with Microsoft Edge!

We have been tracking the progress of Edge over the last 12 months, testing the Windows Insider builds, and providing feedback to Microsoft. We were delighted when Microsoft introduced support for the WebRTC 1.0 APIs in the 2017 Spring release and we applaud them for the improvements they have made since then.

Microsoft continues to dominate the desktop and laptop operating system market with roughly 88% market share. The mix of Windows flavors has changed over the last 4 years with Windows 10 continuing to gain market share mainly at the expense of Windows XP and Windows 8, but also from Windows 7. Given that Edge is the default browser on Windows 10 and Microsoft is not investing further in IE 11, we expect to see increased usage on Edge as Enterprises migrate from Windows 7 to Windows 10.

Edge was released 3 years ago and is very much the new kid on the block compared to its older competitors. Over those 3 years, Edge’s market share has ticked upwards as the market share of Internet Explorer and Firefox has fallen. Chrome has made the biggest gains during this period and remains the top dog.

Chrome and Firefox were the WebRTC pioneers and the first browsers we supported on our platform. To meet the needs of our Enterprise customers, we released the OpenTok plugin for Internet Explorer in 2014. Last October we launched our support for Safari, and today we round out our support for the top 5 web browsers with our Edge beta program.

Over the coming months we will continue to test Edge in production and provide feedback to Microsoft to ensure our customers and the end-users of the OpenTok platform have the best experience possible. Feel free to reach out to us with your feedback or contact us to learn more. 



OpenTok version 2.14: What’s new and how you can use it


Last week, we released OpenTok v2.14, the latest version of our Client SDKs. We wanted to update you on some of the great new features included and how you can use them.

With the Frame Metadata API, the cycleVideo and facingMode APIs, and other features, there is plenty to make it even easier for you to build great live video apps on OpenTok.

Frame Metadata API

The new Frame Metadata API provides developers with a simple way to add metadata to each frame as it is captured. The metadata is embedded with the video stream so that it arrives at the receiver at the same time. The subscriber to the stream can extract this metadata and use it to enhance the experience of the end user.

The metadata can be used for any purpose the developer deems fit. It could be used to simply carry a timestamp or some other test data. It can also be used in a more sophisticated AR application to capture the location of the camera in 3D space.

AR use case for Frame Metadata API on OpenTok

Another use is applying a frame-level graphics transform to a region of interest to enhance the video quality of that region. Our CTO, Badri Rajasekar, gave a presentation at last year’s Kranky Geek Event that demonstrated using this technique to perform “content-aware video encoding.” An app can detect areas where faces are in a video stream and provide more pixels to those areas compared to the background. The app can add metadata to each video frame to indicate the areas that contain faces, and subscribing clients can then use this metadata to properly render the video.

The Frame Metadata API is available in the OpenTok iOS, Android, and Windows SDKs.

You can find sample applications using the Frame Metadata API here:

cycleVideo and facingMode APIs

Our JavaScript SDK now supports a facingMode property when you call OT.initPublisher(). This lets you specify whether you want the front (“user”) or rear (“environment”) facing camera when using OpenTok on a mobile device. We have also added a cycleVideo() method on the Publisher which allows you to switch between the front and rear facing cameras seamlessly. We believe that these changes will help our developers to build better experiences on mobile browsers (Safari on iOS as well as Chrome and Firefox on Android).
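As a minimal sketch (the button element is an assumption), you could start on the rear camera and flip on demand:

const publisher = OT.initPublisher('publisher', {
  facingMode: 'environment' // 'user' selects the front-facing camera
});

document.getElementById('flip-camera').addEventListener('click', () => {
  publisher.cycleVideo().then(({ deviceId }) => {
    console.log('Now capturing from camera:', deviceId);
  });
});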

Codec not supported Event

With Safari’s lack of VP8 support and with Chrome on Android not always supporting H.264, there are times when you will not be able to see the video of the other participant. When this happens, OpenTok.js displays a “Video format not supported” message to end users. We now also surface an Event to developers in case they want to supply their own UI or provide more details to the end user. The videoDisabled Event on the Subscriber has a new reason (“codecNotSupported”) to account for this.
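A minimal sketch of handling that event (showCodecWarning is a hypothetical helper in your app):

subscriber.on('videoDisabled', (event) => {
  if (event.reason === 'codecNotSupported') {
    showCodecWarning(); // swap in your own fallback UI here
  }
});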

Other bug fixes and performance improvements in OpenTok v2.14

  • The OpenTok plugin for Internet Explorer now supports the setAudioVolume() and getAudioVolume() methods of the Subscriber.
  • The audioLevelUpdated Event now works in Safari on iOS without a user gesture. This is actually not something we fixed but another example of us working closely with a browser vendor, in this case Apple, to get an issue fixed and shipped in the browser.
  • We found and fixed some memory leaks in OpenTok.js that will improve performance and prevent crashes in long running sessions.
  • We did some refactoring of OpenTok.js to remove some race conditions from the code. This has resulted in a reduction in the number of errors we’re seeing!
  • We also made some changes in our native SDKs (iOS, Android, and Windows). These will result in performance improvements, including lower CPU and memory usage.

If you’d like to start building with some of these features, sign up for your free OpenTok account here.

 


Enhance Live Streaming and Recording Capabilities with HD


Today, we’re excited to bring support for 720p HD to our Recording and Live Streaming features. With this release, our APIs give developers the option to record and live stream OpenTok sessions in high definition, and as a result create more engaging live video experiences.

Live video sessions on OpenTok already support 720p, but until today, customers could only record and live stream at 480p. With a growing number of customers in education, webinars, and media and entertainment, demand for HD recording and live streaming has been gaining momentum, and we are pleased to finally make it available.

The evolution of the OpenTok platform

A quick jog down memory lane to show how our feature-set has evolved beyond core video and voice:

 Evolution towards HD Recording and Live streaming on OpenTok platform

Through these capabilities, customers have benefitted from unmatched audience reach, creating interactive experiences that engage millions of viewers at scale. Together with our industry-leading core video and voice offering, these capabilities have made TokBox the one-stop shop for all your live video needs.

Recording with OpenTok

OpenTok offers a variety of video recording options to fit our customers’ use cases and businesses.

  • Composed Archiving: Allows you to record OpenTok sessions in a single MP4 file composed of all streams and is optimized for instant playback (this is a major differentiator of our platform).
    • Archives are available immediately after the session is complete. Customers can customize the layout and select the resolution of the composed archive to be SD (640×480) or HD (1280×720); a sketch of starting an HD archive follows this list.
    • The majority of our customers use this because of convenience and ease of use. This includes: 
      • E-learning courses, tutoring sessions and online seminars 
      • Large-scale interactive broadcasts
      • Healthcare and financial services with security requirements
  • Individual Stream Archiving: Creates an individual media file for each stream. This gives customers complete control over post-processing, serving use cases that need advanced cognitive services, such as: 
    • Sentiment analysis
      • Jargon.ai transcribes, records and analyzes conversations. They leverage our Individual Stream Archiving to run in-depth analysis on each individual media file. 
    • Audio transcription
    • Voice/object recognition
  • Encrypted Composed Archiving: Ensures that archived data is never unencrypted at rest or in transit, thus providing the highest level of security. This enables customers to meet the most stringent compliance and regulatory requirements, particularly for healthcare and finance customers. 
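As a minimal Node sketch (credentials and session ID are assumptions; the resolution option accepts '640x480' or '1280x720'), starting an HD composed archive looks roughly like this:

const OpenTok = require('opentok');
const opentok = new OpenTok(apiKey, apiSecret);

opentok.startArchive(sessionId, {
  name: 'hd-recording',
  outputMode: 'composed',
  resolution: '1280x720', // HD
}, (err, archive) => {
  if (err) return console.error(err);
  console.log('Archive started:', archive.id);
});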

HD Recording on the Cambly app with OpenTok

Cambly, a tutoring app, uses Composed Archiving to record every learning session so students can capture and play back their conversations at any time, to really hone and refine their fluency.

Interested in learning more about Recording?

Contact Us

Interactive Broadcast

The Interactive Broadcast API allows customers to embed large-scale interactive video experiences into their branded websites and applications. Broadcasters can host an event in which up to 3,000 participants interact simultaneously in real time, and stream it to an unlimited number of viewers with HLS.

Additionally, broadcasters can stream directly into any video platform, including Facebook Live, Twitch.tv, YouTube Live and more, through RTMP streaming. HLS and RTMP streaming are now available in 720p HD, providing a high-quality viewing experience for a variety of use cases (see the sketch after this list):

  • Webinar playbacks 
  • Interactive webinars
  • Online auction
  • Education
  • Social interactive broadcast 
  • Sports, Media & Entertainment
  • Interactive gaming
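As a minimal Node sketch of HD live streaming (the identifiers and URLs here are placeholders, and we assume the Node SDK’s startBroadcast wrapper and the same opentok instance as the archiving sketch above):

opentok.startBroadcast(sessionId, {
  resolution: '1280x720', // HD broadcast
  outputs: {
    hls: {},
    rtmp: [{
      id: 'myRtmpStream',                       // hypothetical identifiers
      serverUrl: 'rtmp://live.example.com/app',
      streamName: 'myStream',
    }],
  },
}, (err, broadcast) => {
  if (err) return console.error(err);
  console.log('HLS playback URL:', broadcast.broadcastUrls.hls);
});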

Crowdcast uses HD Recording and live streaming from the OpenTok platform

Crowdcast makes it possible to live stream content into any video platform and broadcast to up to 3,000 participants in real time with Live Streaming and Interactive Broadcast from TokBox.

“HD recordings and HD broadcasts are essential for delivering a great experience to our Crowdcast hosts and end users. These features paired with custom layout management allow us to deliver a top notch experience that maximizes screen real estate and results in all users consuming the best possible stream on all platforms, both in real-time and in the recording.”  – Dylan Jhaveri, CTO, Crowdcast

Interested in learning more about Live Streaming or Interactive Broadcast?

Contact Us

HD Recording and Live Streaming Available Today

With the introduction of support for HD Recording and Live Streaming, we are also making changes to our pricing to accommodate this new format. Please refer to our pricing page for more details. Additionally, API documentation for recording and live streaming has been updated with information on how to set the resolution of the recordings and live streaming.

If you’re interested in learning more about HD Recording, HD Live Streaming or other platform features, please contact us.

Resolution 101

HD Recording and live streaming resolutions

The resolutions you see, 480p and 720p, represent the number of horizontal lines a video has from top to bottom. So a 480p video is made up of 480 lines stacked one on top of another, with each line being 640 pixels wide.

True HD starts at 720p: a 720p video has 720 lines that are each 1,280 pixels wide, giving it three times as many pixels as 480p. It can be viewed on a much larger screen, resulting in a crisper viewing experience.


Hacking Health Tech: TokBox at HackHLTH Las Vegas


Recently, my colleagues and I were in Las Vegas sponsoring HackHLTH at the inaugural HLTH conference on the future of healthcare. The hackathon was geared towards building health tech solutions that would improve the health industry and create a more robust ecosystem. In little more than 24 hours, over 250 participants produced more than 80 projects.

Many of those projects took advantage of the OpenTok platform to include live video in their apps. It’s a testament to the great work of the TokBox engineering team that it’s so quick and easy to get live video up and running. I was proud to see teams build such a variety of solutions in such a short space of time.

Support groups growing over live video

Among a wide variety of doctor-patient applications, we saw several support group applications designed to help individuals. One such application, Mindfull, monitored the user’s Fitbit activity and used an ML model to detect activity out of the ordinary. Unusual activity triggered an alert prompting the user to video chat with someone in their support group.

Similarly, another team created an iOS app, GLU, which connected individuals with their support groups over live video.

Innovating for health services

Other use cases included connecting patients with mental health experts over live video. One of these applications, Cork, created a scheduling app for appointments, paired with an interactive questionnaire including a PHQ-9 (a professional tool used to monitor the severity of depression). Together, these features help keep mental health experts engaged with their patients before appointments.

We also saw an interesting use case which combined live video with AI to detect and diagnose bug bites. If the AI failed to classify the bug bite, the patient would connect over live video with a nearby doctor. This is a great example of video used as a next-level service when immediate self-help options can’t resolve a problem.

A winning combination

Our $2000 hackathon prize winner, Team TeleCare, created a mobile application that connected doctors and patients while a third endpoint running our beta MacOS SDK evaluated the patient’s heart and breathing rate. They achieved this by running computer vision on the patient’s face and body in real-time – an AI vision approach that has recently been explored at MIT and other research labs. This background analysis can then be made available as augmented information for the doctor or health practitioner in real-time while they talk with the patient, without the need for patient-side instruments.

health tech hackathon hackhlth TokBox winners

Team TeleCare, shown above (fourth through seventh from left) along with our four TokBox staffers, were Mitchell Ang, Yonni Luu, Tiffany Wu, and Kelvin Chan, who originally met at The University of British Columbia.

We believe that this combination of live video and media processing will become more and more common within healthcare apps. As access to services which analyze things like speech, facial expression, behavior and vital signs improves, I’m looking forward to seeing these combinations become mainstream in health tech.

Changing the face of health tech

As well as these stand-out apps, I saw a lot of great ideas over the weekend. Many of the teams at the hackathon worked towards solutions for important healthcare challenges: decreasing doctor exhaustion, increasing doctor-patient conversations (for example, by having interactive bots engage before the patient conversation), and limiting patient appointment cancellations. As health tech develops and more applications like these are adopted, the whole patient journey will be completely transformed.

All in all, it was a great weekend with participants creating an impressive range of projects. To support the hackathon participants, I hosted a workshop where I went over using the OpenTok API with Web, iOS, Android, Cordova, and even React Native. It was great to see participants go on to use these tools, especially developers who were new to OpenTok.

I’d love to meet you at a hackathon soon! Keep an eye out for announcements about the events we’ll be participating in (follow us on Twitter @tokbox). In the meantime, don’t forget to check out our growing collection of webinars where I dive in deep on specific live video development topics.


Build an AR App with the Frame Metadata API


If you had to choose one memorable thing about the upcoming iOS 12 when it was unveiled at WWDC’18, it would probably be the inclusion of ARKit 2.0, Apple’s Augmented Reality toolkit for iOS. I bet you still remember that cool demo by the Lego folks playing on stage.

In fact, ARKit is probably the component which is going to grow the most in the new version of iOS, with many new features, improvements and even a new app to easily perform real-world measurements.

At TokBox, we released OpenTok version 2.14 a few weeks ago. Included in that release is a new API that will help push ARKit a little bit further. I’m talking of course about the Frame Metadata API.

Here we’re going to look at this new API and how it can help to improve AR use cases. To demonstrate, we’ll see an example in which this new TokBox API plays a decisive role in empowering a complete solution. We’ll also see code so that you can build an AR app using this API.

An Introduction to the Frame Metadata API

We first introduced the API in a previous post, but let’s start with a quick recap. At the most straightforward level, it allows us to develop use cases that weren’t previously possible using other methods.

The most important part of a video conference is the video itself. A video stream is composed of thousands of video frames that are constantly flowing from a publisher to some number of subscribers. The Frame Metadata API allows you to insert some (small) information in each video frame as metadata of that specific frame.

There is no better way to get “real-time” information than this: the metadata forms part of the video frame itself, so the WebRTC engine ensures the data and the video stay fully synchronized, since they are delivered together.

Given that this information is sent along with every video frame packet, which we don’t want to make too large, the metadata size that is available is just 32 bytes. That’s not a large amount of space, but it’s big enough to be able to bundle things like timestamps, data hashes, histograms or positional and angular data from a phone’s sensors.

The good news is that this API is one of the simplest things to use in OpenTok: you just call the setMetadata method on an OTVideoFrame instance. The API is available in the iOS, Android and Windows SDKs. If you want more details, please visit the samples linked in the 2.14 blog entry.

Build an AR app using signals

Now that we know the tools, let’s introduce the idea that we want to explore in this post. The first idea was to create an application that would allow a remote participant (a subscriber) to set 3D annotations in the AR view of a video publisher.

One example application for this use-case could be in the insurance sector. An agent can set annotations in the “world” of the car-owner publisher who is streaming video of their car which has been involved in some kind of accident. Other kinds of remote expert support applications also fit this pattern.

The steps for our simple first attempt were as follows:

  1. The car owner is the video publisher
  2. Their app publishes from ARKit allowing annotations and other objects to be placed into the car owner’s view
  3. The agent is a subscriber to the car owner’s video stream and watches this
  4. The agent taps their screen in their app when they want to annotate or highlight something important
  5. The agent’s app then sends an OpenTok signal requesting that an annotation be placed in the car owner’s view at a particular screen location
  6. The car owner’s app receives the signal and uses ARKit to add the annotation. This is then seen by the agent within the video stream that the car owner is continuing to publish

However, the challenge with this approach is the timing between steps 4 and 6. By the time the car owner’s app places the annotation, its camera view may have changed, so their screen no longer matches the agent’s screen at the moment they tapped. The annotation therefore often ends up misplaced. This is a simple consequence of network delay and the fact that OpenTok signals are delivered in a different “channel” with no guaranteed synchronization with the video frames being sent.

Using Frame Metadata to create real-time annotations

Here is where Frame Metadata comes to the rescue. We can embed the right information about the publisher’s state in each frame. That means the subscriber has real-time position information from the publisher thanks to Frame Metadata. Then, when the remote participant wants to create an annotation, they can first accurately calculate the 3D location of the annotation on the remote end.  

The only piece missing from this puzzle is what “the right information” should be. We want to create a 3D object at a 3D position in the subscriber that sits in front of the image the publisher is seeing. To do that, we are going to send the 3D coordinates of the publisher’s ARKit camera, in the form of its position and orientation, to the subscriber. We will see how to embed the publisher’s ARKit camera transform in the section below where we explore our sample app code.

The steps for our enhanced application are now as follows:

  1. The car owner is still the video publisher
  2. Their app still publishes from ARKit allowing annotations and other objects to be placed into the car owner’s view
  3. Now in addition, the car owner app adds the continuously changing 3D camera information to every frame it publishes using the Frame Metadata API
  4. The agent is still a subscriber to this video stream and watches this
  5. The agent taps their screen in their app when they want to annotate or highlight something important
  6. The agent app now takes the 3D metadata from the frame the agent is watching at the moment they tap and uses this to calculate the correct 3D position for the annotation within the car owner’s view
  7. The agent’s app then sends an OpenTok signal which now includes the correct 3D position for the annotation, which is independent of whether or not the car owner’s camera view changes
  8. The car owner’s app receives the signal and uses ARKit to add the annotation at the correct 3D position. This is then seen by the agent within the video stream that the car owner is continuing to publish.

With this approach any delay in receiving the OpenTok signal no longer has any impact. The agent app has fully synchronized camera data for every video frame and so can position the annotation exactly, even if it then takes a fraction of a second for the signal to arrive and the annotation to actually be created by the user app.

As always, we have created a sample app that puts everything described in this blog post into practice. If you want to see it in action, please get this sample app from GitHub and follow the discussion below.

AR App Architecture

The sample app is an iOS app and the two main elements we are going to use are ARKit and the OpenTok Frame Metadata API. The graphic below shows a simple diagram of how the application works by using elements from both SDKs.

Framemeta data AR app architecture

 

On the publisher side we use an ARSCNView, which is a SceneKit scene with AR capabilities powered by ARKit. That view feeds the back-camera image and the AR scene to a custom capturer that our Publisher uses to send frames to the subscriber. The custom capturer bundles the camera’s 3D position and rotation into the frame metadata and sends it to the subscriber using the OpenTok SDK.

On the subscriber side, the frame will be shown. When the subscriber taps the view to create an annotation, the view will capture the x and y position of the touch. Using the 3D camera position of the publisher for the current frame, which is bundled as metadata in each frame by the publisher, it can calculate the 3D position of the annotation. Once that position is calculated, the subscriber will send a signal to the publisher using the OpenTok SDK with the position of this annotation.

When the publisher receives the signal it adds the annotation to the AR world, so it can be seen both by the publisher and subscriber. Since the subscriber is sending a complete 3D position based on the frame metadata at the moment the screen was touched, it does not matter how the publisher’s video view may have changed since then (unlike our original simplistic “annotate now” signal approach).

Code walkthrough

The sample has two main ViewControllers, PublisherViewController and SubscriberViewController that control each role of the app.

PublisherViewController

The role of this view controller is to hold the AR session and render the world using SceneKit.

For the Frame Metadata API part, we use a custom capturer similar to the custom video driver Swift sample. The most important modifications to that sample are the capability of capturing the SceneKit frame along with the camera input, and the addition of a delegate that is called just before the frame is shipped to the underlying OpenTok SDK.

In the PublisherViewController class we implement the delegate and pack the camera information in the frame metadata.

Since the limitation of the metadata is 32 bytes, we pack the float numbers in a Data array by using this code:

extension PublisherViewController: SCNViewVideoCaptureDelegate {
    func prepare(videoFrame: OTVideoFrame) {
        let cameraNode = sceneView.scene.rootNode.childNodes.first {
            $0.camera != nil
        }
        if let node = cameraNode, let cam = node.camera {
            let data = Data(fromArray: [
                node.simdPosition.x,
                node.simdPosition.y,
                node.simdPosition.z,
                node.eulerAngles.x,
                node.eulerAngles.y,
                node.eulerAngles.z,
                Float(cam.zNear),
                Float(cam.fieldOfView)
                ])

            var err: OTError?
            videoFrame.setMetadata(data, error: &err)
            if let e = err {
                print("Error adding frame metadata: \(e.localizedDescription)")
            }
        }
    }
}

If you want to see how we convert an array of Float to data, please take a look at Data+fromArray.swift file from the sample.

In this class we have the code to add elements when the subscriber signals it.

The content of the signal from the subscriber will be:

AnnotationX:AnnotationY:AnnotationZ:TapX:TapY

With that information, we use two utility methods from SceneKit, projectPoint and unprojectPoint, to transform the 2D position of the tap and the object’s 3D position into the final position of the annotation.

let nodePos = signal.split(separator: ":")
if  nodePos.count == 5,
    let newNodeX = Float(nodePos[0]),
    let newNodeY = Float(nodePos[1]),
    let newNodeZ = Float(nodePos[2]),
    let x = Float(nodePos[3]),
    let y = Float(nodePos[4])
{
    newNode.simdPosition.x = newNodeX
    newNode.simdPosition.y = newNodeY
    newNode.simdPosition.z = newNodeZ
    let z = sceneView.projectPoint(newNode.position).z
    let p = sceneView.unprojectPoint(SCNVector3(x, y, z))
    newNode.position = p

    sceneView.scene.rootNode.addChildNode(newNode)
}

SubscriberViewController

The main role of this class is to get the frame from the publisher, render it, and, when a tap occurs, signal the publisher with the position of the annotation. For rendering we use the same renderer as the custom video driver sample, with the addition of a delegate so it can expose the frame metadata.

The delegate saves the publisher scene camera with this code:

guard let metadata = videoFrame.metadata else {
    return
}

let arr = metadata.toArray(type: Float.self)
let cameraNode = SCNNode()
cameraNode.simdPosition.x = arr[0]
cameraNode.simdPosition.y = arr[1]
cameraNode.simdPosition.z = arr[2]

cameraNode.eulerAngles.x = arr[3]
cameraNode.eulerAngles.y = arr[4]
cameraNode.eulerAngles.z = arr[5]

cameraNode.camera = SCNCamera()
cameraNode.camera?.zFar = CAMERA_DEFAULT_ZFAR
cameraNode.camera?.zNear = Double(arr[6])
cameraNode.camera?.fieldOfView = CGFloat(arr[7])

self.lastNode = cameraNode

Please take a look at the section above where the Publisher bundled this information in the frame.

When the view is tapped, we calculate the annotation position with this code:

guard let lastCamera = lastNode else {
    return
}
// Place the annotation at a fixed depth along the camera's forward vector.
let loc = recognizer.location(in: view)
let nodePos = lastCamera.simdWorldFront * FIXED_DEPTH

otSession.signal(withType: "newNode",
                 string: "\(nodePos.x):\(nodePos.y):\(nodePos.z):\(loc.x):\(loc.y)",
                 connection: nil, error: nil)

So there you have the main elements of this app. See the full GitHub code for how this all fits together into a complete sample app.

Conclusion and Further References

We have seen in this blog how our new Frame Metadata API can be used to build an AR app with features like real-time annotations. This is achieved by allowing fully-synchronized data to be sent with every video frame. In the simple example above we just had one stream, with the publisher giving the remote subscriber the right 3D perspective to calculate the placement of annotations. There are plenty more complex AR scenarios possible with multiple streams in various directions with multiple participants.

In addition, this metadata API can be used in other scenarios where it is very important to have synchronized real-time information, such as within computer vision applications. For example, our CTO discussed video quality improvement using face detection followed by image transformation in his presentation at Kranky Geek last year. In that example, fully-synchronized metadata is used to ensure that varying image transformations by the publisher are correctly reversed by each subscriber.

If you want to see other uses of ARKit and OpenTok, please take a look at our sample code on GitHub:

If you build an AR app using OpenTok and any of these samples, we’d love to hear from you!

The post Build an AR App with the Frame Metadata API appeared first on TokBox Blog.

Enter our AR and Live Video Summer Virtual Hackathon Now!


We’d like to welcome all WebRTC and TokBox API developers, wherever you are in the world, to our AR and Live Video Summer Virtual Hackathon! How would you like the chance to win the $600 prize for the top team, or $300 for the runners-up? They’re up for grabs for the teams that create the best apps combining Augmented Reality and Live Video. Read on to find out how to get involved. The app will be developed in your own time, with a week to get set up and then four weeks of coding in teams of 1 to 3 people, with your app submitted online by August 20th.

An exciting and growing space

Augmented Reality is an exciting and rapidly growing technology space. A recent BI Intelligence report showed the overall AR/VR market growing at an annualized rate of 113% to reach $215 billion in 2021, up from $11 billion in 2017. Today we are seeing significant investments by tech giants such as Apple, Google, Microsoft and Facebook along with Industry app growth in markets including retail, manufacturing and, of course, gaming.

New AR toolkits from Apple, Google and others are making it ever easier to develop Augmented Reality applications on mobile devices. And these AR apps are also increasingly using WebRTC-based Live Video to stream these experiences between remote participants, for example for remote expert help, decision support and training. To be effective in expanding business and social use cases, AR experiences need to be increasingly shared across participants – an area where your existing WebRTC interests and skills will shine, while you increase your development work with Augmented Reality.

Example AR pioneers already using TokBox for Live Video include DAQRI (see our recent blog), who recently spoke at an exciting AR Meetup in San Francisco (video of their talk will be released soon), HelpLightning and others.

And at TokBox, we have been demonstrating example applications that combine Apple’s ARKit with OpenTok APIs, as described in recent blogs and webinars.

Get involved with our AR and Live Video hackathon

So TokBox is holding a fun summer virtual hackathon to build apps that combine your selected AR toolkit, such as Apple ARKit or Google ARCore, with our WebRTC-based OpenTok SDKs – most probably for iOS or Android, but you can choose any platform.

Submissions will be judged based on how AR and Live Video capabilities are combined within intriguing use cases that are business or socially relevant, with bonus points for multi-participant use cases and effective user experience design.

Please see our Hackathon.io overview of the rules and prizes, and simply sign up there to join the virtual hackathon and add your team members. Teams can be up to 3 people total, unless you plan to go solo, which is fine! There are get-started instructions for OpenTok APIs and links to GitHub sample code on our hackathon developer page.

You must also join the HackTokBox Slack workspace to get access to promo codes, ask us questions, get up-to-date information, and talk with other participants. We look forward to seeing you in the virtual world during these summer months!

Sign up to Hack Now

The post Enter our AR and Live Video Summer Virtual Hackathon Now! appeared first on TokBox Blog.

Speed Up OpenTok Session Debugging with Inspector


Diving into data on OpenTok sessions can provide developers with the tools they need to understand bugs and make improvements quickly. That’s why we built Inspector – a tool to help developers understand what happened in specific OpenTok sessions. Users can either enter a session ID or use the Session Dashboard to access the information. This includes user data, errors, video and audio quality and events about a session.

Since first launching Inspector a few years ago, we’ve received feedback from users on some improvements they would like to see in order to make more informed decisions on sessions and debug more efficiently. So we’ve been working on these improvements over the past few months. Today, we’re excited to make them available to all users.

Have a better understanding of your users

One of the first areas we wanted to focus on was user insights. We recognized the need to provide developers with more data on who their end users are. This would let developers match the user IDs in Inspector to the real-life participant who experienced issues. So we’ve added end user data to the Inspector dashboard, including location, device and time connected. This allows developers to get an at-a-glance overview of their users and more quickly understand and identify problems.

OpenTok Session debugging filters for User data

We’ve also provided an additional layer of user data to give developers insights around connection IDs, subscribers, publishers, and end reason, in order to quickly identify issues within the session.

OpenTok Session Debugging User data view

Get a high level view of session errors

We’re also now providing developers with a high-level view of errors across sessions before diving into events or quality data. This saves valuable time in the OpenTok session debugging process.

OpenTok Inspector Error log view

If any errors occurred in the session, they will display in the Error Log section. This section also displays the failure rate for connection, publish, and subscribe attempts in the session or meeting. This rate is calculated by dividing the number of failures by the total number of attempts; for example, 5 failed publish attempts out of 50 gives a 10% publish failure rate.

Filter data and quickly drill down on what matters

Finally, we wanted to make it easier for our developers to drill down and filter data across data types. This lets them focus more quickly on what matters, saving hours of time. We especially recognized the need for this feature as we continued to add more data to Inspector and needed to give users an easy way to filter it.

inspector tool for OpenTok session debugging Meeting statistics view

Now, developers can filter data by user, making it easy to identify users and filter on them from anywhere in the app. We’ve also improved the navigation, letting developers move easily between the different sections of Inspector: Summary, Errors, Quality Metrics and Events.

With these improvements, we aim to provide a more efficient experience for developers and allow them to pinpoint session-related data more quickly. This will make OpenTok session debugging a much easier process. We welcome developer feedback on these updates, as well as on other additions you would like to see in Inspector or any of our other developer tools.

The post Speed Up OpenTok Session Debugging with Inspector appeared first on TokBox Blog.

Update to Chrome Screen Sharing


As you may know, Google recently announced the deprecation of inline installation of Chrome extensions in an effort to improve transparency and security. For end-users, this means that the process for installing screen sharing extensions will change from inline installation to installation from the Chrome Web Store. Below, you’ll find information on the deprecation timeline provided by the Chromium team:

  • September 12th – inline installation disabled
  • December 2018 – inline installation API method removed in Chrome 71

What’s happening?

Starting September 12th, users installing Chrome extensions will be automatically redirected to the Chrome Web Store to complete the extension installation process. To prevent this from creating user experience issues for applications with screen sharing, the WebRTC team is working with the Chromium team to add getDisplayMedia, a screen sharing API, into Chrome. Mozilla Firefox and Microsoft Edge currently support the getDisplayMedia API and it’s in development in Safari as well, so it will be great to see Chrome adopt the same API for screen sharing.

What is TokBox doing?

We are closely watching the Google Chrome ticket “Ship screen capture for WebRTC for the web,” which covers the getDisplayMedia functionality, and plan on implementing the feature in the next release of our JavaScript SDK to leverage the change. Once this API is implemented, users on Chrome versions with the getDisplayMedia API will be able to screen share without having to install any extensions. We’ll also add sample code and documentation to reflect the changes.
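In the meantime, here is a minimal feature-detection sketch; the property locations follow the evolving spec as it stood, so treat them as assumptions rather than a final API surface:

async function captureScreen() {
  if (navigator.mediaDevices && navigator.mediaDevices.getDisplayMedia) {
    // Current spec location: getDisplayMedia lives on mediaDevices.
    return navigator.mediaDevices.getDisplayMedia({ video: true });
  }
  if (navigator.getDisplayMedia) {
    // Earlier drafts exposed the method directly on navigator.
    return navigator.getDisplayMedia({ video: true });
  }
  // Neither form exists: fall back to an extension-based flow.
  throw new Error('Screen capture here still requires an extension');
}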

How can I stay informed?

To follow our ongoing updates, please subscribe to this FAQ article here. We’ll continue to update this article once we have additional information.

What actions can you take?

You should consider upgrading your web application to make the Chrome Web Store installation redirect as seamless as possible. Alongside your usual plan of upgrading to the latest OpenTok JS SDK, you may also want to inform your end users that they will be redirected to the Chrome Web Store when they try to screen share.

If you have any questions, please feel free to reach out to our team at support@tokbox.com, or check out our FAQ.

The post Update to Chrome Screen Sharing appeared first on TokBox Blog.


How To Get the Most out of the New Account Dashboard


In April, we announced updates to our pricing and packaging, which allows TokBox customers to have more flexibility with the ability to add-on features a la carte to their accounts. With this in mind, we’ve been hard at work over the past couple of months to make our entire user experience more flexible and allow users to have more control in configuring their OpenTok projects.

Today, we’re excited to announce that we’ve made several improvements to the Account Dashboard, which provides users with more self-service functionality. These updates will make it faster for users to configure their OpenTok projects and make quick changes, and to gather critical information on the add-ons they have access to.

Here’s a summary of the changes we’ve made:

Self Serve Project Management

Many of our customers have selected advanced features, such as AES-256, China Relay, and Regional Media Zones, to add on to their accounts. To make it easier to configure these features for specific projects and enable or disable them, we’ve added the ability for users to make these project-specific changes right from the Account Dashboard. This removes the extra step of going through our support team to get set up with a feature, and gives users the flexibility of enabling add-ons for specific projects.

Video Codec Selection

The OpenTok platform supports two popular video codecs, VP8 and H.264. A video codec has two parts, an encoder and a decoder. It has the ability to encode (compress) incoming digital video frames from a webcam into a stream of binary data that can be sent over a network. It also has the ability to ingest a stream of binary data and decode (decompress) it into a flow of raw video frames that can be displayed on a screen.

The VP8 real-time video codec is a software codec. It can work well at lower bitrates and is a mature video codec in the context of WebRTC. The VP8 codec supports the OpenTok Scalable Video feature, which means it works well in large sessions with supported browsers and devices.

The H.264 real-time video codec is available in both hardware and software forms depending on the device. It is a relatively new codec in the context of WebRTC although it has a long history for streaming movies and video clips over the internet. Hardware codec support means that the core CPU of the device doesn’t have to work as hard to process the video, resulting in reduced CPU load. The number of hardware instances is device-dependent with iOS having the best support.

Across the ecosystem of devices and browsers that OpenTok supports, there are varying levels of support for the VP8 and H.264 real-time video codecs. Some endpoints support both video codecs, and some support just one (for example, Safari only supports H.264). Depending on the type of application you’re building and the types of browsers and devices your end users will use, your choice of preferred codec will change.

With this in mind, we now support the ability for users to select which codec to assign as the preferred codec for a particular OpenTok project. Depending on the endpoints that customers want to support, the preferred codec can be modified accordingly.

Environment Selection

Our platform supports two separate server environments, the Standard Environment and the Enterprise Environment. The Standard Environment provides early access to new features and a more frequent release cadence. The Enterprise environment offers a separate cloud environment, SLAs, and a predictable release cadence, and is accessible to customers with the Enterprise plan only.

We now support the ability for customers to configure which environment they want to use for their specific OpenTok projects, providing additional flexibility. This allows Enterprise customers, for example, to keep some projects on the Enterprise environment and move others to the Standard environment, giving them the ability to test and access features early.

We’ll continue to listen to customer feedback and make enhancements to our Account Dashboard to provide an elevated user experience. If you have any questions, please don’t hesitate to reach out.

The post How To Get the Most out of the New Account Dashboard appeared first on TokBox Blog.

Integrating WebRTC with PSTN using OpenTok & Nexmo SIP


In this blog we look at how to connect OpenTok Live Video sessions with traditional PSTN phone calls. We will demonstrate how to connect an OpenTok session to PSTN with an audio stream that connects through OpenTok SIP Interconnect to a Nexmo SIP-PSTN Gateway.

OpenTok SIP Interconnect is a general purpose SIP capability that can be used to connect to many different kinds of gateway or other SIP systems. TokBox is now part of Vonage, so in this blog we will use our own Nexmo programmable communications APIs to bridge the call.

We’ve created a sample application on GitHub in the opentok-sip-samples repo. The Nexmo SIP dial out sample application leverages the OpenTok Node Server SDK which allows you to create sessions, generate tokens, dial out to SIP endpoints, force disconnect clients, and much more. In this application, we will also be using Express, a Node.js framework, to create our own app server along with JavaScript on the client side for the web app.

SIP Integration Overview

We first use client-side JavaScript to communicate with the app server through HTTP requests to get session credentials. In this example we are using JavaScript; in a native iOS or Android client app you may do this in another language. The sessionId is fetched from OpenTok and the token is then generated on the app server. We then use those credentials on the client side to initialize and connect to an OpenTok session. After connecting successfully, we create and publish the stream. We also set event listeners so we can subscribe to any stream that is created. After publishing the WebRTC audio and/or video stream, we can then dial out to a SIP URI using OpenTok SIP Interconnect. OpenTok takes care of the dial out and forwards the stream to Nexmo, which then dials out to the PSTN user.

Sample Code

Let’s take a dive into the code so that you can also build this application.

Before you start, please make sure you have these installed on your machine:

To get started, clone the opentok-sip-samples repo and change directory to Nexmo-SIP-Dial-Out.

As you can see below, in the opentok.js file, located in the js folder, we initialize a session by calling the initSession method on the OT object. We then set event listeners on the session object for streamCreated and streamDestroyed where we subscribe to a stream when it’s created and print a message when it’s destroyed. After setting the event listeners, we connect to the session by passing in the token and an error handler to make sure there weren’t any errors. If there is no error, we proceed to creating a publisher and publishing.

const session = OT.initSession(apiKey, sessionId);

session.on({
  streamCreated: (event) => {
    const subscriberClassName = `subscriber-${event.stream.streamId}`;
    const subscriber = document.createElement('div');
    subscriber.setAttribute('id', subscriberClassName);
    document.getElementById('subscribers').appendChild(subscriber);
    session.subscribe(event.stream, subscriberClassName);
  },
  streamDestroyed: (event) => {
    console.log(`Stream ${event.stream.name} ended because ${event.reason}.`);
  },
});

session.connect(token, (error) => {
  if (error) {
    console.log('error connecting to session');
  } else {
    const publisher = OT.initPublisher('publisher');
    session.publish(publisher);
  }
});

The index.ejs file in the views folder is simply creating a few buttons which either make a fetch request to the app server to dial out or hang up. You can find more details on this here.
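By way of illustration, a button’s click handler might issue that request like this; this helper is our own sketch, but the /dial-out endpoint and its query parameters match the server code shown below:

// Hypothetical client-side helper: ask the app server to dial a PSTN number.
async function dialOut(roomId, phoneNumber) {
  const res = await fetch(`/dial-out?roomId=${roomId}&phoneNumber=${phoneNumber}`);
  if (!res.ok) {
    throw new Error('Dial-out request failed');
  }
  return res.json(); // the SIP call object returned by OT.dial on the server
}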

Let’s move on to the app server where we will create a few endpoints to render our index.ejs page, dial out to the SIP URI, and hang up.

First off, we import all of the dependencies that you need: express, body-parser, opentok, etc.

Then we will create a /room/:rid endpoint which will dynamically create sessions and tokens based on the rid parameter:

app.get('/room/:rid', (req, res) => {
  const roomId = req.params.rid;
  if (app.get(roomId)) {
    const sessionId = app.get(roomId);
    const token = generateToken(sessionId);
    renderRoom(res, sessionId, token, roomId);
  } else {
    setSessionDataAndRenderRoom(res, roomId);
  }
});

As you can see above, we either render the index.ejs with the existing sessionId in memory or create a session and then render the index.ejs page.

After this, we will create the dial-out endpoint which will allow us to make the dial out call to OpenTok:

app.get('/dial-out', (req, res) => {
  const { roomId, phoneNumber } = req.query;
  const sipTokenData = `{"sip":true, "role":"client", "name":"'${phoneNumber}'"}`;
  const sessionId = app.get(roomId);
  const token = generateToken(sessionId, sipTokenData);
  const options = setSipOptions();
  const sipUri = `sip:${phoneNumber}@sip.nexmo.com`;
  OT.dial(sessionId, token, sipUri, options, (error, sipCall) => {
    if (error) {
      res.status(400).send('There was an error dialing out');
    } else {
      app.set(phoneNumber, sipCall.connectionId);
      res.json(sipCall);
    }
  });
});

Here we generate the SIP URI based on the phone number that we receive from the client-side application and also add the phone number as part of the token data. We also use the options parameter to set the SIP options, which include the Nexmo API Key and API Secret set as the username and password, respectively. These are crucial for authenticating with Nexmo’s APIs. If the dial out is successful, we then map the phone number to the SIP connectionId in memory. Please keep in mind that we’re using memory because this is a sample application; in production, you should use a database for the mapping.

Finally, let’s create an endpoint for hanging up the call to the PSTN user:

app.get('/hang-up', (req, res) => {
  const { roomId, phoneNumber } = req.query;
  const connectionId = app.get(phoneNumber);
  if (app.get(roomId)) {
    const sessionId = app.get(roomId);
    OT.forceDisconnect(sessionId, connectionId, (error) => {
      if (error) {
        res.status(400).send('There was an error hanging up');
      } else {
        res.status(200).send('Ok');
      }
    });
  } else {
    res.status(400).send('There was an error hanging up');
  }
});

The hang-up endpoint invokes the forceDisconnect method on the OpenTok Node SDK and passes in the sessionId and the connectionId of the PSTN user. This action then disconnects the PSTN user from the OpenTok session.

Conclusion – OpenTok and PSTN Connected

In this blog, we’ve covered the important concepts required to connect an OpenTok session through a SIP gateway to a PSTN user. To see the full code please refer to the opentok-sip-samples repo, and also sign up for my upcoming webinar on SIP Interconnect and Nexmo on September 26th to learn more.

The post Integrating WebRTC with PSTN using OpenTok & Nexmo SIP appeared first on TokBox Blog.

OpenTok Server SDKs – What’s new?


In an effort to make our APIs more accessible and easy to use, we’ve spent some time improving all six of our server SDKs. The OpenTok Server SDKs provide a convenient way to interact with the OpenTok REST API in a variety of languages. Please note that these enhancements will not break your implementation should you choose to upgrade.

Before these recent changes, you would have to use the OpenTok Server SDK along with the REST API to accomplish some tasks. For example, if you wanted to dial out to a SIP client, you would have to use a server SDK to generate a token, but you would have to use the REST API to make the dial call. We’ve improved the workflow for this so you only need to use a server SDK to interact with the OpenTok REST APIs. We’ve also added the ability to archive in HD and have added support for broadcasting features so you can control your OpenTok application through one library.
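For instance, dialing out to a SIP endpoint can now be done with the same Node SDK call that sits alongside token generation; here is a minimal sketch (the SIP URI and empty options are illustrative, not values from the sample):

const OpenTok = require('opentok');
const opentok = new OpenTok(API_KEY, API_SECRET);

// Generate a token and dial out, all through one library.
const token = opentok.generateToken(sessionId);
opentok.dial(sessionId, token, 'sip:user@sip.example.com', {}, (err, sipCall) => {
  if (err) return console.error('Dial failed:', err);
  console.log('SIP call connection:', sipCall.connectionId);
});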

For more information on the release notes, please check out the repos below:

As we continue to evolve and add more features, we will continue to add support for the REST API through the server SDKs. Considering that these server SDKs are open source projects, I encourage everyone to participate and help improve them.

If you’d like to see us add support for other languages, please reach out to us at sdk-beta@tokbox.com.


The post OpenTok Server SDKs – What’s new? appeared first on TokBox Blog.

OpenTok version 2.15: What’s new and how you can use it


Last week, we released OpenTok v2.15, the latest version of our Client SDKs. We wanted to update you on some of the great new features included and how you can use them.

Audio Enhancements in Web and Windows SDKs

In opentok.js 2.14 we added the ability to switch cameras using the Publisher cycleVideo() method, which was really well received. Version 2.15.0 of opentok.js and our Windows SDK add the ability to switch to a different audio source. In opentok.js you do this using the Publisher setAudioSource() method, and in Windows you use the AudioDevice.SetInputAudioDevice method. The obvious use case for this API is to allow your users to switch microphones without needing to create a whole new publisher. But it can also be used to switch to other supported audio sources, for example loading audio from an audio file or creating custom audio. In opentok.js you do this by passing a custom audioTrack to setAudioSource(); in the Windows SDK you use the AudioDevice.SetCustomAudioDevice method and pass a custom audio driver.
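Here is a minimal sketch of the microphone-switching use case in opentok.js; the device-picking logic (taking the second microphone found) is our own assumption for illustration:

const publisher = OT.initPublisher('publisher');

// Enumerate devices and switch the publisher's microphone in place.
OT.getDevices((err, devices) => {
  if (err) return console.error(err);
  const mics = devices.filter((device) => device.kind === 'audioInput');
  if (mics.length > 1) {
    // setAudioSource accepts a device ID (or a custom audio track).
    publisher.setAudioSource(mics[1].deviceId)
      .then(() => console.log('Now using a different microphone'))
      .catch((e) => console.error('Could not switch audio source:', e));
  }
});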

To get a better idea of how to use cycleVideo(), setAudioSource() and the AudioLevelUpdatedEvent in your web application using opentok.js, have a look at the new Publish Devices sample application.

We also brought some other new audio features to our Windows SDK with v2.15.0 to bring it in line with our other SDKs. The Windows SDK now supports stereo audio through the stereo argument to the Publisher constructor. We have also added the AudioLevel event to the Publisher and Subscriber, which lets you know the amount of audio activity.

Handling Multiple Video Codecs in Web and Android SDKs

With opentok.js v2.12 we added support for Safari, and along with it support for the H.264 codec. There are some cases where H.264 is not going to work, and as a developer you would like to know this in advance so that you can tell your customers to use a different browser or device. With 2.15.0 we have solved this with the getSupportedCodecs API. This API is available in opentok.js and in our Android SDK, where there are cases in which H.264 or VP8 might not be supported.
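A minimal sketch of using it in opentok.js follows; the exact codec name strings are taken from the docs, so treat them as assumptions:

// Check up front whether this browser can encode and decode H.264.
OT.getSupportedCodecs().then(({ videoEncoders, videoDecoders }) => {
  if (!videoEncoders.includes('H264') || !videoDecoders.includes('H264')) {
    console.warn('H.264 unsupported here; suggest a VP8-capable browser.');
  }
}).catch((err) => console.error(err));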

Publisher Stats on iOS, Android and Windows

In opentok.js v2.13 we added the Publisher Stats API. With v2.15 we have brought this API to our iOS, Android and Windows SDKs as well.

This API gives you information about network statistics for audio and video, such as packets lost and packets received. It allows you as a developer to gain insight into the quality of the end user’s network connection so that you can provide them feedback. We already had a similar API available on the Subscriber side, but the addition of this API on the Publisher side completes the picture so that you know what the upstream network looks like.
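In opentok.js, for example, a minimal sketch (assuming an existing publisher object) looks like this; the native SDKs expose equivalent calls:

// Log upstream video packet loss for each subscriber to this publisher.
publisher.getStats((err, statsArray) => {
  if (err) return console.error(err);
  statsArray.forEach(({ stats }) => {
    const { packetsLost, packetsSent } = stats.video;
    console.log(`video packets lost: ${packetsLost} of ${packetsLost + packetsSent}`);
  });
});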

For more information have a look at our documentation for the feature:

Share your screen in Chrome without an extension

Chrome 70 allows screen sharing without an extension, using the MediaDevices.getDisplayMedia() method. We are really excited about this feature. This is a huge step towards making screen sharing even easier, both to implement and to use. This feature is still hidden behind a flag in Chrome, so to try it out you will need to go to chrome://flags in Chrome 70+ and enable the “Experimental Web Platform features” flag. We anticipate that this feature will be enabled by default in future stable versions of Chrome (and Opera).
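As an illustration (assuming a connected session object), publishing a screen with opentok.js looks like this; once getDisplayMedia is enabled, Chrome runs it without any extension:

// Check screen-sharing support, then publish the screen instead of the camera.
OT.checkScreenSharingCapability((response) => {
  if (!response.supported) {
    console.warn('Screen sharing is not supported in this browser');
    return;
  }
  const screenPublisher = OT.initPublisher('screen-preview', {
    videoSource: 'screen',
  });
  session.publish(screenPublisher);
});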

Vulnerability in the Plugin for Internet Explorer

In opentok.js version 2.15.0 we fixed a vulnerability in the plugin for Internet Explorer. We recommend that all of our customers update to this latest version of opentok.js to fix this issue.

Other bug fixes and performance improvements

  • x86_64 architecture – Our Android SDK now supports the x86_64 architecture so that it can support wearables like the Vuzix M300.
  • IP Whitelist Flag – Enterprise partners that have IP whitelisting enabled for an OpenTok project should now set the new ipWhitelist parameter of the Session. This tells us to load only from servers within that whitelist range. We have added this setting to all of our Client SDKs.
  • PreferredFramerate and PreferredResolution – We have brought our PreferredFramerate and PreferredResolution settings to the Windows SDK. The Subscriber.PreferredFramerate and Subscriber.PreferredResolution properties let you set the preferred frame rate and resolution for a subscriber’s stream. This setting only applies to subscribers in a routed session.

For a full list of the features and fixes in our client SDKs have a look at the release notes.

The post OpenTok version 2.15: What’s new and how you can use it appeared first on TokBox Blog.

Insights, now with GraphQL


A few months back, TokBox announced its Insights Dashboard, a view in the Account Portal for customers to better understand their applications’ video data. At the same time, we opened up an API (in private beta) to programmatically access this data in RESTful fashion along with summaries of individual sessions.

Today we’re pushing a new way to access this data as a public beta using GraphQL. GraphQL is an alternative to the typical REST approach of accessing data over HTTP. It was developed by Facebook in 2012, and open sourced in 2015.

Note that this post is discussing our Insights APIs that obtain video metadata. At this point, none of TokBox’s core video APIs are moving to GraphQL. With that out of the way, let’s get started.

Typically, REST APIs have different URL endpoints, requiring an HTTP request per resource. For each request, there is a response with a fixed and predefined object returned. GraphQL provides a schema of the data and a single endpoint that gives clients the power to ask for exactly what schema fields they need and nothing more. Requests are always sent as an HTTP POST with the fields to be returned specified in the body. The resulting response only contains values corresponding to those fields. The benefit of this is that fewer requests are made, and only necessary information is transmitted over the wire.
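To make that concrete, here is a minimal sketch of issuing such a request from Node; the endpoint URL and auth header are our assumptions, so check the developer center guide for the real values:

// Hypothetical transport helper: POST a GraphQL query as a JSON body.
const fetch = require('node-fetch');

async function runQuery(query, jwt) {
  const res = await fetch('https://insights.opentok.com/graphql', { // assumed URL
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-OPENTOK-AUTH': jwt, // assumed header carrying a project JWT
    },
    body: JSON.stringify({ query }),
  });
  return res.json(); // { data: ... } shaped like the query
}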

Let’s use a scenario to illustrate this more clearly. Suppose that your team has built an e-learning application in which the instructor shares their screen, but students only publish video from their camera. You, as the application developer, want to create a pie chart showing which browsers are being used to screen share.

In a typical REST API, your requests would look something like this:

  1. List the connections in a given session.
  2. Get an object for each of the returned connection IDs, collecting browser information as you go.
  3. List all the streams in a session.
  4. Get an object for each of the returned Stream IDs, collecting which stream was from a screen share, along with its Connection ID as you go.

Now you can map connection IDs that had a screen share (from the Stream object) to the connection IDs in the Connection object, which contains the browser used. Filter out extraneous data, plot your results, and you’re done.

With GraphQL, you construct your query using the GraphiQL Explorer tool and make a single API call:

  1. List the connections in a given session by including the connections field. Within the connections’ resources field, include the browser and publishers fields. Within the publishers’ resources field, include the videoType field.

Your request body will look something like this:

query {
  project(projectId: 12345678) {
    sessionData {
      session(sessionId: "your_session_id") {
        meetings {
          resources {
            connections {
              resources {
                browser
                publishers {
                  resources {
                    stream {
                      videoType
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}

And the response will match this format:

...
...
"connections": {
  "resources": [
    {
      "browser": "Chrome",
      "publishers": {
        "resources": [
          {
            "stream": {
              "videoType": "camera"
            }
          },
          {
            "stream": {
              "videoType": "screen"
            }
          }
        ]
      }
    },
    {
      "browser": "Chrome",
      "publishers": {
        "resources": [
          {
            "stream": {
              "videoType": "camera"
            }
          }
        ]
      }
    }
  ]
}

As you can see, each object returns only the information requested. The connection lists its browsers and publishers, and each publisher lists their videoType – just like we asked. You can take this single response and create your chart.

For more details on how to get started, check out our developer center guide and the GraphiQL explorer.

One final disclaimer: During the beta period this API will be free to use. Once we roll it out into general availability, we will introduce pricing for this API. For now our goal is to add more fields to the API and see how customers are using it. Please don’t hesitate to reach out if you have any questions. We look forward to seeing what you build.

The post Insights, now with GraphQL appeared first on TokBox Blog.
