
Submitting a Java/Scala job to the Data Processing platform using the OVHcloud Manager

Find out how to create a cluster and run your Apache Spark Java/Scala job on the Data Processing platform using the OVHcloud Manager

Last updated 4th May, 2020

Objective

This guide shows you how to upload an Apache Spark job written in Java or Scala to your OVHcloud Object Storage and run it with Data Processing using the OVHcloud Manager.

If you would like to submit an Apache Spark job written in Python, please read this document instead: How to submit a Python job to Data Processing using OVHcloud Manager

In this guide, we assume that you are using the OVHcloud Manager to interact with the Data Processing platform.

For an introduction to the Data Processing service, see Data Processing Overview.

Requirements

Instructions

Step 1: Upload your application code to Object Storage

Before running your job on the Data Processing platform, you need to create a container in OVHcloud Object Storage for your job and upload your application jar file into it. You can work with your Object Storage using either the OVHcloud Manager or the OpenStack Horizon dashboard.

Please see Creating Storage Containers in Customer Panel or Create an object container in Horizon for more details.

If you don't currently have any application code but would still like to try OVHcloud Data Processing, you can download an Apache Spark package and extract it. Inside, you will find a jar file in the examples/jars folder that runs the SparkPi sample (which simply computes an approximation of Pi).
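For reference, the SparkPi sample is roughly equivalent to the minimal Scala sketch below (the object name and point count are illustrative, not the exact code shipped in the Spark package). Note the final spark.stop() call, which matters in Step 3:

    // Minimal Pi-estimating Spark job (illustrative sketch, not the exact bundled SparkPi code)
    import org.apache.spark.sql.SparkSession

    object PiEstimate {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder.appName("PiEstimate").getOrCreate()
        val slices = if (args.length > 0) args(0).toInt else 2
        val n = 100000 * slices

        // Throw n random darts at the unit square and count those landing inside the unit circle
        val count = spark.sparkContext.parallelize(1 to n, slices).map { _ =>
          val x = Math.random() * 2 - 1
          val y = Math.random() * 2 - 1
          if (x * x + y * y <= 1) 1 else 0
        }.reduce(_ + _)

        println(s"Pi is roughly ${4.0 * count / n}")

        // Stop the Spark context so the job terminates cleanly (see Step 3)
        spark.stop()
      }
    }

If you write your own job along these lines, build it into a jar (for example with sbt package) and upload it to your container like any other application jar.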

Step 2: Submit your Spark job

To submit your job with the required parameters, follow these steps:

  • Log in to the OVHcloud Manager and select Public Cloud
  • Select the relevant project if you have multiple projects in your OVHcloud account
  • Select Data Processing from the left panel
  • Select Submit a new job

Data Processing Manager

Step 3: Check information, status and logs of a job

In the Data Processing section of the Manager, you can see the list of all the jobs you have submitted so far. If you click on a job's name, you can see detailed information about it, including its status. You can then click on Logs to see the live logs while the job is running.

If your job is stuck in "Running", you probably forgot to stop the Spark context in your code. For details on stopping it, please refer to the Java Spark context documentation.
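As an illustration, here is a hedged minimal sketch in Scala of where that call belongs (MyJob and the processing logic are placeholders): the session is stopped in a finally block so the Spark context is released even if the job fails.

    import org.apache.spark.sql.SparkSession

    object MyJob { // hypothetical job name
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder.appName("MyJob").getOrCreate()
        try {
          // ... your processing logic ...
        } finally {
          // Stopping the session shuts down the underlying Spark context,
          // so the job can leave the "Running" state once main() returns
          spark.stop()
        }
      }
    }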

Once the job has finished, the complete logs are saved to your Object Storage container, and you can download them from your account whenever you like.

Please see How to check your job's logs in the Data Processing manager page for more details.

Step 4: Check your job's results

After your Spark job has finished, you can check the results in its logs, as well as in any connected storage that your job was designed to update.

Go further

To learn more about using Data Processing, including how to create a cluster and process your data, we invite you to visit the Data Processing documentation page.

You can send your questions, suggestions or feedback to our community of users on https://community.ovh.com/en/ or in our public Gitter channel.

