Submitting a Java/Scala job to the Data Processing platform using the OVHcloud Control Panel

Find out how to create a cluster and run your Apache Spark Java/Scala job on the Data Processing platform using the OVHcloud Control Panel

Last updated 4th May, 2020


This guide explains how to upload an Apache Spark job written in Java or Scala to your OVHcloud Object Storage and run it with Data Processing using the OVHcloud Control Panel.

If you would like to submit an Apache Spark job written in Python instead, read this document: How to submit a Python job to Data Processing using the OVHcloud Control Panel.

In this guide, we assume that you are using the OVHcloud Control Panel to work with the Data Processing platform.

For an introduction to the Data Processing service, see the Data Processing Overview.



Step 1: Upload your application code to Object Storage

Before running your job on the Data Processing platform, you need to create a container in OVHcloud Object Storage and upload your application's jar file into it. You can manage your Object Storage using either the OVHcloud Control Panel or the OpenStack Horizon dashboard.

Please see Creating Storage Containers in the OVHcloud Control Panel or Create an object container in Horizon for more details.

If you don't have application code yet but would still like to try OVHcloud Data Processing, you can download an Apache Spark package and extract it. Inside, in the examples/jars folder, you will find a jar file to run the SparkPi sample, which simply computes an approximation of Pi.
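To give an idea of what the SparkPi sample computes, here is a plain-Java sketch of the same Monte Carlo estimation (no Spark required; the class and method names are our own, not those of the actual SparkPi example): random points are sampled in the unit square, and the fraction landing inside the quarter circle approximates Pi/4.

```java
import java.util.Random;

public class PiEstimate {
    // Estimate Pi by sampling random points in the unit square and
    // counting how many fall inside the quarter circle of radius 1.
    static double estimate(int samples, long seed) {
        Random rnd = new Random(seed);
        int inside = 0;
        for (int i = 0; i < samples; i++) {
            double x = rnd.nextDouble();
            double y = rnd.nextDouble();
            if (x * x + y * y <= 1.0) {
                inside++;
            }
        }
        // Points inside / total points converges to Pi/4.
        return 4.0 * inside / samples;
    }

    public static void main(String[] args) {
        System.out.println(estimate(1_000_000, 42L));
    }
}
```

The real SparkPi example distributes this sampling loop across the cluster's executors, which is why it makes a convenient first job to submit.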

Once you have a storage container and an application to run on OVHcloud Data Processing, upload the jar file to the root directory of your object container.

Step 2: Submit your Spark job

To submit your job with the required parameters, follow these steps:

  • Log in to the OVHcloud Control Panel and select Public Cloud
  • Select the relevant project if you have multiple projects in your OVHcloud account
  • Select Data Processing from the left panel
  • Select Submit a new job

Data Processing Manager

Step 3: Check information, status and logs of a job

In the Data Processing section of the OVHcloud Control Panel, you can see the list of all the jobs you have submitted so far. Click a job's name to see detailed information about it, including its status. You can then click Logs to see the live logs while the job is running.

If your jobs are stuck in "Running", you probably forgot to stop the Spark context in your code. To stop it, please refer to the JavaSparkContext documentation.

Once the job is finished, the complete logs will be saved to your Object Storage container. You can download them from your account whenever you like.

Please see How to check your job's logs in the Data Processing manager page for more details.

Step 4: Check your job's results

After your Spark job finishes, you can check its results in the logs, as well as in any connected storage your job was designed to update.

Go further

To learn more about using Data Processing and how to submit a job and process your data, we invite you to look at the Data Processing documentation page.

You can send your questions, suggestions or feedback in our community of users, or on our Discord in the channel #dataprocessing-spark.

