What are the absolute minimum modifications one must make to a Java program to render it suitable for map-reduce?

This is my Java program:

```java
import java.io.*;

class evmTest {
    public static void main(String[] args) {
        try {
            Runtime rt = Runtime.getRuntime();
            String command = "evm --debug --code 7f00000000000000000000000000000000000000000000000000000000000000027f00000000000000000000000000000000000000000000000000000000000000027f00000000000000000000000000000000000000000000000000000000000000020101 run";
            Process proc = rt.exec(command);

            BufferedReader stdInput = new BufferedReader(new InputStreamReader(proc.getInputStream()));
            BufferedReader stdError = new BufferedReader(new InputStreamReader(proc.getErrorStream()));

            // read the output from the command
            System.out.println("Here is the standard output of the command:\n");
            String s = null;
            while ((s = stdInput.readLine()) != null) {
                System.out.println(s);
            }

            // read any errors from the attempted command
            System.out.println("Here is the standard error of the command (if any):\n");
            while ((s = stdError.readLine()) != null) {
                System.out.println(s);
            }
        } catch (IOException e) {
            System.out.println(e);
        }
    }
}
```

It prints the output of the terminal command, which looks like this:

```
Here is the standard output of the command:

0x
Here is the standard error of the command (if any):

#### TRACE ####
PUSH32  pc=00000000 gas=10000000000 cost=3

PUSH32  pc=00000033 gas=9999999997 cost=3
Stack:
00000000  0000000000000000000000000000000000000000000000000000000000000002

PUSH32  pc=00000066 gas=9999999994 cost=3
Stack:
00000000  0000000000000000000000000000000000000000000000000000000000000002
00000001  0000000000000000000000000000000000000000000000000000000000000002

ADD  pc=00000099 gas=9999999991 cost=3
Stack:
00000000  0000000000000000000000000000000000000000000000000000000000000002
00000001  0000000000000000000000000000000000000000000000000000000000000002
00000002  0000000000000000000000000000000000000000000000000000000000000002

ADD  pc=00000100 gas=9999999988 cost=3
Stack:
00000000  0000000000000000000000000000000000000000000000000000000000000004
00000001  0000000000000000000000000000000000000000000000000000000000000002

STOP  pc=00000101 gas=9999999985 cost=0
Stack:
00000000  0000000000000000000000000000000000000000000000000000000000000006

#### LOGS ####
```

This is, of course, one of the simplest map-reduce jobs, from the Apache examples:

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer
       extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

My question is: what is the simplest way to map-reducify the Java program I shared at the top of this post?
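For reference, the exec-and-read logic from my program can be factored into a standalone helper that a Mapper's map() method could then call once per input record, with the job wiring following the WordCount skeleton above (minus the reducer). This is only a sketch under my own naming — `EvmRunner` and `run` are hypothetical, not Hadoop API — and it uses ProcessBuilder instead of Runtime.exec so the argument list is explicit:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical helper: the exec-and-read logic from the program above,
// factored out so a Hadoop Mapper could call it per input record.
public class EvmRunner {

    // Run an external command and return its output lines.
    public static List<String> run(List<String> command)
            throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(command);
        pb.redirectErrorStream(true); // merge stderr (where evm writes its trace) into stdout
        Process proc = pb.start();
        List<String> lines = new ArrayList<>();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(proc.getInputStream()))) {
            String s;
            while ((s = r.readLine()) != null) {
                lines.add(s);
            }
        }
        proc.waitFor();
        return lines;
    }

    public static void main(String[] args) throws Exception {
        // demo with a command available everywhere; in the real job this
        // would be the evm invocation from the question
        for (String line : run(Arrays.asList("echo", "hello"))) {
            System.out.println(line); // prints "hello"
        }
    }
}
```

Inside a map-only job (mapreduce.job.reduces=0), map() would pass the bytecode from each input record to this helper and write the returned lines to the Context.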

UPDATE

Ran it with this command:

```
$HADOOP_HOME/bin/hadoop jar /usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.8.1.jar -D mapreduce.job.reduces=0 -input /input_0 -output /steaming-output -mapper ./mapper.sh
```
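I haven't shown mapper.sh above, so for completeness here is a minimal sketch of the shape it could take (my guess, not the original file): read one bytecode string per input line, run evm on it, and emit each trace line as a tab-separated key/value pair.

```shell
#!/usr/bin/env bash
# Hypothetical streaming mapper: each stdin line is one EVM bytecode string.
# Emit "<code>\t<trace line>" pairs; evm writes its trace to stderr,
# so the streams are merged with 2>&1.
run_evm_mapper() {
  while IFS= read -r code; do
    [ -z "$code" ] && continue
    evm --debug --code "$code" run 2>&1 | while IFS= read -r line; do
      printf '%s\t%s\n' "$code" "$line"
    done
  done
}

# only consume stdin when the script is actually fed input
# (as it is by Hadoop Streaming)
if [ ! -t 0 ]; then
  run_evm_mapper
fi
```

With mapreduce.job.reduces=0, Hadoop Streaming writes the mapper output straight to the output directory, which is all a run-a-binary-per-record job needs.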

Resulted in this error:

```
17/09/26 03:26:56 INFO mapreduce.Job: Task Id : attempt_1506277206531_0004_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: Error in configuring object
```
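Not a definitive diagnosis, but with Hadoop Streaming this "Error in configuring object" failure is often caused by the mapper script not being shipped to the task nodes or not being executable. Shipping it with the streaming jar's -file option and marking it executable is worth trying (command fragment only, same paths as above):

```shell
chmod +x mapper.sh
$HADOOP_HOME/bin/hadoop jar /usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.8.1.jar \
  -D mapreduce.job.reduces=0 \
  -file ./mapper.sh \
  -input /input_0 -output /steaming-output \
  -mapper ./mapper.sh
```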

Additional information from the server: