Wednesday 5 July 2017

Web Real-Time Communication


by Anjali Pawar


WEBRTC



WebRTC stands for Web Real-Time Communication. It is a free, modern technology for the web: a collection of communication protocols and application programming interfaces (APIs) that enable real-time communication over peer-to-peer (P2P) connections through simple APIs.

The WebRTC components have been optimized to best serve this purpose. Applications built on it can offer video conferencing, peer-to-peer file transfer, chat, and desktop sharing without any internal or external plugins.

WebRTC is being standardized by the World Wide Web Consortium (W3C) and the Internet Engineering Task Force (IETF). OpenWebRTC provides another free implementation based on the multimedia framework GStreamer. WebRTC uses the Real-time Transport Protocol (RTP) to transfer audio and video.


WebRTC is supported by browsers and platforms such as Google Chrome, Opera, Firefox, Microsoft Edge, Android, Chrome OS, Firefox OS, and BlackBerry. Some video streaming software, such as Flussonic Media Server and Wowza Streaming Engine, also supports WebRTC functionality.


WebRTC Components
Following are the three main components of WebRTC:

· GetUserMedia: Allows a web browser to access the camera and microphone and capture media such as video and audio.

· RTCPeerConnection: Sets up audio and video calls; it is the peer-to-peer connection over which media flows.

· RTCDataChannel: Allows browsers to exchange arbitrary data with each other over a peer-to-peer connection (see the sketch after this list).
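
A minimal sketch of how these components fit together is shown below (TypeScript in the browser). The STUN server URL and the signaling step that delivers the offer to the other peer are assumptions for illustration and not part of the WebRTC API itself.

// Create a peer connection; a public STUN server is assumed here for illustration.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
});

// RTCDataChannel: an arbitrary-data channel carried over the peer connection.
const channel = pc.createDataChannel("chat");
channel.onopen = () => channel.send("hello from this peer");
channel.onmessage = (event) => console.log("received:", event.data);

// GetUserMedia: capture local audio/video and attach the tracks to the call.
async function startCall(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // RTCPeerConnection: create an offer describing the call; delivering it to the
  // remote peer (signaling) happens outside WebRTC, for example over a WebSocket.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
}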


The WebRTC API also includes a function:

· getStats: This function allows a web application to retrieve a set of statistics about a WebRTC session, as shown below. This data is described in a separate W3C document.
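
As a sketch, assuming an existing RTCPeerConnection such as the pc object above, the statistics can be read like this:

// Read the current statistics report from an existing RTCPeerConnection.
async function logStats(pc: RTCPeerConnection): Promise<void> {
  const report = await pc.getStats();
  report.forEach((stats) => {
    // Every entry carries at least an id, a timestamp and a type such as "inbound-rtp".
    console.log(stats.type, stats.id, stats.timestamp);
  });
}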


The MediaStream API was developed for easier access to media streams from local cameras and microphones. The getUserMedia() method is the primary way to obtain a stream from local input devices.
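
For example, a page can request a combined audio and video stream as follows (a sketch; the resolution constraints are just one possible choice, and the call will reject if the user denies permission):

// Ask the user for permission and obtain a stream from the local camera and microphone.
async function openLocalMedia(): Promise<MediaStream> {
  return navigator.mediaDevices.getUserMedia({
    audio: true,
    video: { width: 1280, height: 720 }, // requested size; the browser picks the closest match
  });
}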


The API has a few key features:
· A real-time media stream is represented by a stream object in the form of video or audio

· It provides a level of security through user permissions: the user is asked before a web application can start fetching a stream

· The input devices are chosen and handled by the MediaStream API (for example, when there are two cameras or microphones connected to the device), as in the sketch below
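
For instance, when more than one camera is connected, the available devices can be listed and a particular one requested by its deviceId (a sketch using enumerateDevices() from the same Media Capture specification):

// List the available input devices and request a specific camera by its deviceId.
async function openSecondCamera(): Promise<MediaStream> {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const cameras = devices.filter((device) => device.kind === "videoinput");
  if (cameras.length === 0) {
    throw new Error("no camera found");
  }
  const chosen = cameras[1] || cameras[0]; // prefer a second camera, fall back to the first
  return navigator.mediaDevices.getUserMedia({
    video: { deviceId: chosen.deviceId },
  });
}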

Each MediaStream object includes several MediaStreamTrack objects, which provide video and audio from different input devices.

Each MediaStreamTrack object may contain several channels (for example, the right and left audio channels). Channels are the smallest units defined by the MediaStream API.
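
A sketch of inspecting the tracks inside a stream (for example one returned by getUserMedia() above):

// Inspect the MediaStreamTrack objects contained in a MediaStream.
function describeStream(stream: MediaStream): void {
  for (const track of stream.getTracks()) {
    // kind is "audio" or "video"; label names the input device the track came from.
    console.log(track.kind, track.label, track.enabled);
  }
  console.log("audio tracks:", stream.getAudioTracks().length);
  console.log("video tracks:", stream.getVideoTracks().length);
}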


There are two ways to output MediaStream objects. First, we can render the output into a video or audio element. Second, we can send the output to an RTCPeerConnection object, which transmits it to a remote peer.
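
Both options look roughly like the following sketch, which assumes a <video> element with the id "local" on the page and a peer connection pc as in the earlier example:

// Option 1: render the stream locally in a <video> element.
function showLocally(stream: MediaStream): void {
  const video = document.getElementById("local") as HTMLVideoElement; // assumed element id
  video.srcObject = stream;
  void video.play();
}

// Option 2: hand the stream to an RTCPeerConnection, which delivers it to the remote peer.
function sendToPeer(stream: MediaStream, pc: RTCPeerConnection): void {
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));
}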
