Kishan Nakrani for Video SDK

Originally published at videosdk.live

Integrate RTMP Live Stream in React Native Video Call App


📌 Introduction

Integrating RTMP (Real-Time Messaging Protocol) into your React Native video call app enables seamless live streaming functionality. With this feature, users can broadcast their video calls to a wider audience in real time. Implementing RTMP in React Native is straightforward with the appropriate SDKs and APIs, and it adds versatility and dynamism to your video call app, enriching user engagement and satisfaction.

Benefits of integrating the RTMP feature into a React Native video call app:

  1. Real-Time Broadcasting : RTMP enables real-time streaming of video and audio content, ensuring instant communication with minimal delay.
  2. Scalability : RTMP supports scalable streaming, allowing your app to handle a large number of concurrent viewers without compromising performance.
  3. High-Quality Streaming : RTMP offers high-quality video and audio streaming, providing users with a superior viewing experience.

Use cases for the RTMP feature in a React Native video call app:

  1. Virtual Events : Host virtual conferences, webinars, and live seminars, allowing participants to join remotely and engage in real time.
  2. Live Tutorials : Conduct live tutorials and workshops, enabling instructors to interact with students in real time, answer questions, and provide feedback.
  3. Remote Collaboration : Facilitate remote team meetings, enabling teams to collaborate effectively regardless of their geographical locations.

By following the provided guide, you can incorporate RTMP live streaming effortlessly, enhancing the capabilities of your app.

How to build a React Native Video Call App with RTMP & VideoSDK

🚀 Getting Started with VideoSDK

To take advantage of the live streaming functionality, we must use the capabilities that VideoSDK offers. Before diving into the implementation steps, let's ensure you complete the necessary prerequisites.

Create a VideoSDK Account

Go to your VideoSDK dashboard and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality.

Generate your Auth Token

Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features.

For a more visual understanding of the account creation and token generation process, consider referring to the provided tutorial.

Prerequisites and Setup

Make sure your development environment meets the following requirements:

  • Node.js v12+
  • NPM v6+ (comes installed with newer Node versions)
  • Android Studio or Xcode installed

πŸ› οΈ Install VideoSDK Config.

It is necessary to set up VideoSDK within your project before going into the details of integrating the RTMP live stream feature. Install VideoSDK using NPM or Yarn, depending on the needs of your project.

  • For NPM
npm install "@videosdk.live/react-native-sdk" "@videosdk.live/react-native-incallmanager"
  • For Yarn
yarn add "@videosdk.live/react-native-sdk" "@videosdk.live/react-native-incallmanager"

Project Configuration

Before integrating the live streaming functionality, ensure that your project is correctly prepared to handle the integration. This setup consists of a sequence of steps for configuring permissions, dependencies, and platform-specific parameters so that VideoSDK can function seamlessly inside your application context.

Android Setup

  • Add the required permissions to the AndroidManifest.xml file.
<manifest
  xmlns:android="http://schemas.android.com/apk/res/android"
  package="com.cool.app"
>
    <!-- Give all the required permissions to app -->
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
    <!-- Needed to communicate with already-paired Bluetooth devices. (Legacy up to Android 11) -->
    <uses-permission
        android:name="android.permission.BLUETOOTH"
        android:maxSdkVersion="30" />
    <uses-permission
        android:name="android.permission.BLUETOOTH_ADMIN"
        android:maxSdkVersion="30" />

    <!-- Needed to communicate with already-paired Bluetooth devices. (Android 12 upwards)-->
    <uses-permission android:name="android.permission.BLUETOOTH_CONNECT" />

    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
    <uses-permission android:name="android.permission.RECORD_AUDIO" />
    <uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW" />
    <uses-permission android:name="android.permission.FOREGROUND_SERVICE"/>
    <uses-permission android:name="android.permission.WAKE_LOCK" />

    <application>
        <meta-data
            android:name="live.videosdk.rnfgservice.notification_channel_name"
            android:value="Meeting Notification" />
        <meta-data
            android:name="live.videosdk.rnfgservice.notification_channel_description"
            android:value="Whenever meeting started notification will appear." />
        <meta-data
            android:name="live.videosdk.rnfgservice.notification_color"
            android:resource="@color/red" />
        <service android:name="live.videosdk.rnfgservice.ForegroundService" android:foregroundServiceType="mediaProjection"></service>
        <service android:name="live.videosdk.rnfgservice.ForegroundServiceTask"></service>
    </application>
</manifest>

AndroidManifest.xml

  • Update your colors.xml file for internal dependencies.
<resources>
  <item name="red" type="color">
    #FC0303
  </item>
  <integer-array name="androidcolors">
    <item>@color/red</item>
  </integer-array>
</resources>

android/app/src/main/res/values/colors.xml

  • Link the necessary VideoSDK dependencies.
  dependencies {
   implementation project(':rnwebrtc')
   implementation project(':rnfgservice')
  }

android/app/build.gradle

include ':rnwebrtc'
project(':rnwebrtc').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-webrtc/android')

include ':rnfgservice'
project(':rnfgservice').projectDir = new File(rootProject.projectDir, '../node_modules/@videosdk.live/react-native-foreground-service/android')

android/settings.gradle

import live.videosdk.rnwebrtc.WebRTCModulePackage;
import live.videosdk.rnfgservice.ForegroundServicePackage;

public class MainApplication extends Application implements ReactApplication {
  // In the default React Native template, getPackages() lives inside the
  // ReactNativeHost instance created in this class.
  @Override
  protected List<ReactPackage> getPackages() {
      @SuppressWarnings("UnnecessaryLocalVariable")
      List<ReactPackage> packages = new PackageList(this).getPackages();
      // Packages that cannot be autolinked yet can be added manually here, for example:

      packages.add(new ForegroundServicePackage());
      packages.add(new WebRTCModulePackage());

      return packages;
  }
}

MainApplication.java

# This one fixes a weird WebRTC runtime problem on some devices.
android.enableDexingArtifactTransform.desugaring=false

android/gradle.properties

  • Include the following line in your proguard-rules.pro file (optional: if you are using Proguard)
-keep class org.webrtc.** { *; }

android/app/proguard-rules.pro

  • In your build.gradle file, update the minimum OS/SDK version to 23.
buildscript {
  ext {
      minSdkVersion = 23
  }
}

build.gradle

iOS Setup

IMPORTANT: Ensure that you are using CocoaPods version 1.10 or later.

  1. To update CocoaPods, you can reinstall the gem using the following command:
$ sudo gem install cocoapods

2. Manually link react-native-incall-manager (if it is not linked automatically).

Select Your_Xcode_Project/TARGETS/BuildSettings, in Header Search Paths, add "$(SRCROOT)/../node_modules/@videosdk.live/react-native-incall-manager/ios/RNInCallManager"

3. Change the path of react-native-webrtc in your Podfile using the following line:

pod 'react-native-webrtc', :path => '../node_modules/@videosdk.live/react-native-webrtc'

4. Change the version of your platform.

You need to change the platform field in the Podfile to 12.0 or above because react-native-webrtc doesn't support iOS versions earlier than 12.0. Update the line: platform :ios, '12.0'.

5. Install pods.

After updating the version, you need to install the pods by running the following command:

pod install

6. Add the "libreact-native-webrtc.a" binary.

Add the "libreact-native-webrtc.a" binary to the "Link Binary With Libraries" section in the target of your main project folder.

7. Declare permissions in Info.plist:

Add the following lines to your Info.plist file, located at (project folder/ios/projectname/Info.plist):

<key>NSCameraUsageDescription</key>
<string>Camera permission description</string>
<key>NSMicrophoneUsageDescription</key>
<string>Microphone permission description</string>

ios/projectname/Info.plist

Register Service

Register the VideoSDK services in your root index.js file for service initialization.

import { AppRegistry } from "react-native";
import App from "./App";
import { name as appName } from "./app.json";
import { register } from "@videosdk.live/react-native-sdk";

register();

AppRegistry.registerComponent(appName, () => App);

index.js

🎥 Essential Steps to Implement the Video Calling Functionality

Step 1: Get started with api.js

Before moving on, you must create an API request to generate a unique meetingId. You will need an authentication token, which you can create either through the videosdk-rtc-api-server-examples or directly from the VideoSDK Dashboard for developers.

export const token = "<Generated-from-dashboard>";
// API call to create meeting
export const createMeeting = async ({ token }) => {
  const res = await fetch(`https://api.videosdk.live/v2/rooms`, {
    method: "POST",
    headers: {
      authorization: `${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({}),
  });

  const { roomId } = await res.json();
  return roomId;
};

api.js
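The token is hardcoded here for simplicity. If you run the videosdk-rtc-api-server-examples project (or your own token server), you could fetch a short-lived token at runtime instead. The helper below is only a sketch; the URL and route are placeholders you would need to adapt to your server.

// Hypothetical helper: fetch the token from your own auth server instead of hardcoding it.
// The URL and route below are placeholders - adjust them to match your token server.
export const getToken = async () => {
  const res = await fetch("http://localhost:9000/get-token");
  const { token } = await res.json();
  return token;
};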

Step 2: Wireframe App.js with all the components

To build up a wireframe of App.js, you need to use the VideoSDK context and hooks. VideoSDK provides the MeetingProvider and MeetingConsumer components along with the useMeeting and useParticipant hooks.

First, you need to understand the Context Provider and Consumer. Context is primarily used when some data needs to be accessible by many components at different nesting levels.

  • MeetingProvider : This is the Context Provider. It accepts config and token as props. The Provider component accepts a value prop to be passed to consuming components that are descendants of this Provider. One Provider can be connected to many consumers. Providers can be nested to override values deeper within the tree.
  • MeetingConsumer : This is the Context Consumer. All consumers that are descendants of a Provider will re-render whenever the Provider's value prop changes (a minimal usage sketch appears a little further below).
  • useMeeting : This is the meeting hook API. It includes all the meeting-related methods and properties, such as join, leave, enable/disable the mic or webcam, etc.
  • useParticipant : This is the participant hook API. It is responsible for handling all the events and props related to one particular participant, such as name, webcamStream, micStream, etc.

The Meeting Context provides a way to listen for any changes that occur when a participant joins the meeting or makes modifications to their microphone, camera, and other settings.
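This guide uses the useMeeting hook throughout, but for reference, a minimal sketch of the consumer pattern could look like the following. It assumes MeetingConsumer accepts a function as children, and it reuses the meetingId, token, and MeetingView names from the App.js wireframe below.

// Minimal sketch of the consumer pattern; the rest of this guide uses hooks instead.
// Import MeetingConsumer from "@videosdk.live/react-native-sdk" if you use this pattern.
<MeetingProvider
  config={{ meetingId, micEnabled: false, webcamEnabled: true, name: "Test User" }}
  token={token}
>
  <MeetingConsumer>{() => <MeetingView />}</MeetingConsumer>
</MeetingProvider>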

Begin by making a few changes to the code in the App.js file.

import React, { useState } from "react";
import {
  SafeAreaView,
  TouchableOpacity,
  Text,
  TextInput,
  View,
  FlatList,
} from "react-native";
import {
  MeetingProvider,
  useMeeting,
  useParticipant,
  MediaStream,
  RTCView,
} from "@videosdk.live/react-native-sdk";
import { createMeeting, token } from "./api";

function JoinScreen(props) {
  return null;
}

function ControlsContainer() {
  return null;
}

function MeetingView() {
  return null;
}

export default function App() {
  const [meetingId, setMeetingId] = useState(null);

  const getMeetingId = async (id) => {
    const meetingId = id == null ? await createMeeting({ token }) : id;
    setMeetingId(meetingId);
  };

  return meetingId ? (
    <SafeAreaView style={{ flex: 1, backgroundColor: "#F6F6FF" }}>
      <MeetingProvider
        config={{
          meetingId,
          micEnabled: false,
          webcamEnabled: true,
          name: "Test User",
        }}
        token={token}
      >
        <MeetingView />
      </MeetingProvider>
    </SafeAreaView>
  ) : (
    <JoinScreen getMeetingId={getMeetingId} />
  );
}

App.js

Step 3: Implement Join Screen

The join screen will serve as a medium to either schedule a new meeting or join an existing one.

function JoinScreen(props) {
  const [meetingVal, setMeetingVal] = useState("");
  return (
    <SafeAreaView
      style={{
        flex: 1,
        backgroundColor: "#F6F6FF",
        justifyContent: "center",
        paddingHorizontal: 6 * 10,
      }}
    >
      <TouchableOpacity
        onPress={() => {
          props.getMeetingId();
        }}
        style={{ backgroundColor: "#1178F8", padding: 12, borderRadius: 6 }}
      >
        <Text style={{ color: "white", alignSelf: "center", fontSize: 18 }}>
          Create Meeting
        </Text>
      </TouchableOpacity>

      <Text
        style={{
          alignSelf: "center",
          fontSize: 22,
          marginVertical: 16,
          fontStyle: "italic",
          color: "grey",
        }}
      >
        ---------- OR ----------
      </Text>
      <TextInput
        value={meetingVal}
        onChangeText={setMeetingVal}
        placeholder={"XXXX-XXXX-XXXX"}
        style={{
          padding: 12,
          borderWidth: 1,
          borderRadius: 6,
          fontStyle: "italic",
        }}
      />
      <TouchableOpacity
        style={{
          backgroundColor: "#1178F8",
          padding: 12,
          marginTop: 14,
          borderRadius: 6,
        }}
        onPress={() => {
          props.getMeetingId(meetingVal);
        }}
      >
        <Text style={{ color: "white", alignSelf: "center", fontSize: 18 }}>
          Join Meeting
        </Text>
      </TouchableOpacity>
    </SafeAreaView>
  );
}

JoinScreen Component

Step 4: Implement Controls

The next step is to create a ControlsContainer component to manage features such as joining or leaving a meeting and enabling or disabling the webcam/mic.

In this step, the useMeeting hook is used to acquire the required methods, such as join(), leave(), toggleWebcam and toggleMic.

const Button = ({ onPress, buttonText, backgroundColor }) => {
  return (
    <TouchableOpacity
      onPress={onPress}
      style={{
        backgroundColor: backgroundColor,
        justifyContent: "center",
        alignItems: "center",
        padding: 12,
        borderRadius: 4,
      }}
    >
      <Text style={{ color: "white", fontSize: 12 }}>{buttonText}</Text>
    </TouchableOpacity>
  );
};

function ControlsContainer({ join, leave, toggleWebcam, toggleMic }) {
  return (
    <View
      style={{
        padding: 24,
        flexDirection: "row",
        justifyContent: "space-between",
      }}
    >
      <Button
        onPress={() => {
          join();
        }}
        buttonText={"Join"}
        backgroundColor={"#1178F8"}
      />
      <Button
        onPress={() => {
          toggleWebcam();
        }}
        buttonText={"Toggle Webcam"}
        backgroundColor={"#1178F8"}
      />
      <Button
        onPress={() => {
          toggleMic();
        }}
        buttonText={"Toggle Mic"}
        backgroundColor={"#1178F8"}
      />
      <Button
        onPress={() => {
          leave();
        }}
        buttonText={"Leave"}
        backgroundColor={"#FF0000"}
      />
    </View>
  );
}

ControlsContainer Component

function ParticipantList() {
  return null;
}
function MeetingView() {
  const { join, leave, toggleWebcam, toggleMic, meetingId } = useMeeting({});

  return (
    <View style={{ flex: 1 }}>
      {meetingId ? (
        <Text style={{ fontSize: 18, padding: 12 }}>
          Meeting Id: {meetingId}
        </Text>
      ) : null}
      <ParticipantList />
      <ControlsContainer
        join={join}
        leave={leave}
        toggleWebcam={toggleWebcam}
        toggleMic={toggleMic}
      />
    </View>
  );
}

MeetingView Component

Step 5: Render Participant List

After implementing the controls, the next step is to render the joined participants.

You can get all the joined participants from the useMeeting Hook.

function ParticipantView() {
  return null;
}

function ParticipantList({ participants }) {
  return participants.length > 0 ? (
    <FlatList
      data={participants}
      renderItem={({ item }) => {
        return <ParticipantView participantId={item} />;
      }}
    />
  ) : (
    <View
      style={{
        flex: 1,
        backgroundColor: "#F6F6FF",
        justifyContent: "center",
        alignItems: "center",
      }}
    >
      <Text style={{ fontSize: 20 }}>Press Join button to enter meeting.</Text>
    </View>
  );
}

ParticipantList Component

function MeetingView() {
  // Get `participants` from useMeeting Hook
  const { join, leave, toggleWebcam, toggleMic, participants } = useMeeting({});
  const participantsArrId = [...participants.keys()];

  return (
    <View style={{ flex: 1 }}>
      <ParticipantList participants={participantsArrId} />
      <ControlsContainer
        join={join}
        leave={leave}
        toggleWebcam={toggleWebcam}
        toggleMic={toggleMic}
      />
    </View>
  );
}

MeetingView Component

Step 6: Handling Participant's Media

Before handling the participant's media, you need to understand a couple of concepts.

1. useParticipant Hook

The useParticipant hook is responsible for handling all the properties and events of one particular participant who has joined the meeting. It takes participantId as an argument.

const { webcamStream, webcamOn, displayName } = useParticipant(participantId);

useParticipant Hook Example

2. MediaStream API

The MediaStream API is used to attach a MediaTrack to the RTCView component, enabling the playback of audio or video.

<RTCView
  streamURL={new MediaStream([webcamStream.track]).toURL()}
  objectFit={"cover"}
  style={{
    height: 300,
    marginVertical: 8,
    marginHorizontal: 8,
  }}
/>

MediaStream API Example

Rendering Participant Media

function ParticipantView({ participantId }) {
  const { webcamStream, webcamOn } = useParticipant(participantId);

  return webcamOn && webcamStream ? (
    <RTCView
      streamURL={new MediaStream([webcamStream.track]).toURL()}
      objectFit={"cover"}
      style={{
        height: 300,
        marginVertical: 8,
        marginHorizontal: 8,
      }}
    />
  ) : (
    <View
      style={{
        backgroundColor: "grey",
        height: 300,
        justifyContent: "center",
        alignItems: "center",
      }}
    >
      <Text style={{ fontSize: 16 }}>NO MEDIA</Text>
    </View>
  );
}

ParticipantView Component

Congratulations! By following these steps, you're on your way to unlocking video calling within your application. Now, let's move forward and integrate the RTMP feature to build immersive live streaming experiences for your users!

Integrate RTMP Live Stream Feature

RTMP is a widely used protocol for live streaming video content from VideoSDK to platforms like YouTube, Twitch, Facebook, and others.

To initiate live streaming from VideoSDK to platforms supporting RTMP ingestion, you simply need to provide the platform-specific stream key and stream URL. This enables VideoSDK to connect to the platform's RTMP server and transmit the live video stream.

Furthermore, VideoSDK offers flexibility in configuring livestream layouts. You can achieve this by either selecting different prebuilt layouts in the configuration or by providing your own custom template for livestreaming, catering to your specific layout preferences.

This guide will provide an overview of how to implement starting and stopping RTMP live streaming with VideoSDK.

Start Livestream

The startLivestream() method, accessible from the useMeeting hook, is used to initiate the RTMP live stream of a meeting. This method accepts the following two parameters:

  1. outputs: This parameter takes an array of objects containing the RTMP url and streamKey specific to the platform where you want to initiate the live stream.
  2. config (optional): This parameter defines the layout configuration for the live stream.
const config = {
  // Layout Configuration
  layout: {
    type: "GRID", // "SPOTLIGHT" | "SIDEBAR", Default : "GRID"
    priority: "SPEAKER", // "PIN", Default : "SPEAKER"
    gridSize: 4, // MAX : 4
  },

  // Theme of RTMP
  theme: "DARK", // "LIGHT" | "DEFAULT"
};

const outputs = [
  {
    url: "<RTMP_URL>",
    streamKey: "<RTMP_STREAM_KEY>",
  },
];

startLivestream(outputs, config);

Stop Livestream

The stopLivestream() method, accessible from the useMeeting hook, is used to stop the RTMP live stream of a meeting.

import { useMeeting } from "@videosdk.live/react-native-sdk";
import { TouchableOpacity, Text } from "react-native";

const MeetingView = () => {
  const { startLivestream, stopLivestream } = useMeeting();

  const handleStartLivestream = () => {
    // Start Livestream
    startLivestream(
      [
        {
          url: "rtmp://a.rtmp.youtube.com/live2",
          streamKey: "key",
        },
      ],
      {
        layout: {
          type: "GRID",
          priority: "SPEAKER",
          gridSize: 4,
        },
        theme: "DARK",
      }
    );
  };

  const handleStopLivestream = () => {
    // Stop Livestream
    stopLivestream();
  };

  return (
    <>
      <TouchableOpacity
        onPress={() => {
          handleStartLivestream();
        }}
      >
        <Text>Start Livestream</Text>
      </TouchableOpacity>

      <TouchableOpacity
        onPress={() => {
          handleStopLivestream();
        }}
      >
        <Text>Stop Livestream</Text>
      </TouchableOpacity>
    </>
  );
};

Event associated with Livestream

  • onLivestreamStateChanged - The onLivestreamStateChanged() event is triggered whenever the state of the meeting livestream changes.
import { Constants, useMeeting } from "@videosdk.live/react-native-sdk";

function onLivestreamStateChanged(data) {
  const { status } = data;

  if (status === Constants.livestreamEvents.LIVESTREAM_STARTING) {
    console.log("Meeting livestream is starting");
  } else if (status === Constants.livestreamEvents.LIVESTREAM_STARTED) {
    console.log("Meeting livestream is started");
  } else if (status === Constants.livestreamEvents.LIVESTREAM_STOPPING) {
    console.log("Meeting livestream is stopping");
  } else if (status === Constants.livestreamEvents.LIVESTREAM_STOPPED) {
    console.log("Meeting livestream is stopped");
  } else {
    //
  }
 }

const {
  meetingId,
  // ...other properties and methods returned by useMeeting
} = useMeeting({
  onLivestreamStateChanged,
});
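As a usage sketch (an assumption about your app's UI rather than part of the SDK), you could keep the latest livestream status in React state and use it, for example, to disable the Start/Stop buttons while the stream is starting or stopping:

import { useState } from "react";
import { Constants, useMeeting } from "@videosdk.live/react-native-sdk";

// Sketch: track the livestream status in React state so the UI can react to it.
function useLivestreamStatus() {
  const [livestreamStatus, setLivestreamStatus] = useState(null);

  useMeeting({
    onLivestreamStateChanged: ({ status }) => setLivestreamStatus(status),
  });

  // The stream is "busy" while it is starting or stopping.
  const isTransitioning =
    livestreamStatus === Constants.livestreamEvents.LIVESTREAM_STARTING ||
    livestreamStatus === Constants.livestreamEvents.LIVESTREAM_STOPPING;

  return { livestreamStatus, isTransitioning };
}

You could then pass isTransitioning to the disabled prop of the Start/Stop Livestream TouchableOpacity buttons shown earlier.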

Custom Template

With VideoSDK, you can also use your own custom-designed layout template for the livestream. To use a custom template, first create one by following the custom template guide. Once the template is set up, you can use the REST API to start the livestream with the templateUrl parameter.
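A rough sketch of such a REST call is shown below. The endpoint path and body fields are assumptions modeled on the room-creation request used earlier in this guide, so verify the exact route and parameters in the VideoSDK REST API reference before using it.

// Sketch only: the endpoint path and body fields are assumptions - confirm them
// against the VideoSDK REST API reference.
const startLivestreamWithTemplate = async ({ token, roomId, templateUrl, outputs }) => {
  const res = await fetch("https://api.videosdk.live/v2/livestreams/start", {
    method: "POST",
    headers: {
      authorization: `${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ roomId, templateUrl, outputs }),
  });
  return await res.json();
};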

🔚 Conclusion

Integrating RTMP functionality into your React Native app with VideoSDK empowers you to create a more versatile and engaging user experience. This guide provided a roadmap for setting up the development environment, configuring the SDK, and starting and stopping an RTMP live stream. With RTMP support, you can enable features like live streaming to external platforms or broadcasting calls to a wider audience.

Remember, VideoSDK offers a generous free tier that includes 10,000 minutes, allowing you to experiment and build your video call application without an initial investment. Start your free trial today and unlock the potential of RTMP streaming in your React Native app!
