Sunday, July 28, 2013

mahout + oozie

import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;

// Launch the mahout command line as a child process
Process process1 = Runtime.getRuntime().exec(cmdToBeExec);

// Read the child process's standard output line by line
InputStream is = process1.getInputStream();
InputStreamReader isr = new InputStreamReader(is);
BufferedReader br = new BufferedReader(isr);
String line;
while ((line = br.readLine()) != null) {
    System.out.println("Output is: " + line);
}
I wrote a program like this and then configured the workflow; the mahout command line for the algorithm is passed in as a variable. At first it did not execute at all. After I also read the error output via process1.getErrorStream(), it seemed to start executing, but from the JobTracker's point of view it is stuck and never moves forward. There are no other errors in the log, just heartbeat messages.
What is going on here?
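For illustration: a frequent reason a Runtime.exec() child appears to hang is that nobody drains its stdout/stderr pipes, so the child blocks once a pipe buffer fills. Below is a minimal sketch (not the poster's code; the command is a placeholder) that drains both streams on separate threads before calling waitFor():

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

public class StreamGobbler extends Thread {
    private final InputStream in;
    private final String tag;

    StreamGobbler(InputStream in, String tag) {
        this.in = in;
        this.tag = tag;
    }

    @Override
    public void run() {
        // Drain the stream continuously so the child never blocks on a full pipe buffer
        try (BufferedReader br = new BufferedReader(new InputStreamReader(in))) {
            String line;
            while ((line = br.readLine()) != null) {
                System.out.println(tag + line);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws Exception {
        // Placeholder command; in the thread's case this would be the mahout command line
        Process p = Runtime.getRuntime().exec(new String[] {"bash", "-c", "echo out; echo err 1>&2"});
        new StreamGobbler(p.getInputStream(), "OUT: ").start();
        new StreamGobbler(p.getErrorStream(), "ERR: ").start();
        // Wait for the command only after both streams are being drained
        System.out.println("Exit code: " + p.waitFor());
    }
}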
------ Solution --------------------------------------------
(This post was last edited by tntzbzc on 2013-05-26 14:57:35.) Do the JobTracker and TaskTracker logs report any errors at all?
Can the mahout job run if you launch it directly with hadoop jar?

------ Solution --------------------------------------------
How does mahout run its multiple-iteration jobs?
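For context, a Mahout driver such as KMeansDriver submits one MapReduce job per iteration and waits for each to finish before starting the next. The driver is an ordinary Hadoop Tool, so as an alternative to shelling out with Runtime.exec it can be invoked in-process, which at least surfaces the driver's own logging. A rough sketch, assuming Mahout 0.x's KMeansDriver with its usual -i/-c/-o/-x/-cl options and hypothetical HDFS paths; adjust for whatever algorithm is actually being run:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ToolRunner;
import org.apache.mahout.clustering.kmeans.KMeansDriver;

public class RunKMeansInProcess {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // KMeansDriver is a normal Hadoop Tool; it submits one MapReduce job
        // per iteration and blocks until each completes before the next starts.
        int rc = ToolRunner.run(conf, new KMeansDriver(), new String[] {
                "-i", "/user/hadoop/input/vectors",   // input vectors (hypothetical path)
                "-c", "/user/hadoop/input/clusters",  // initial cluster centers (hypothetical path)
                "-o", "/user/hadoop/output/kmeans",   // output directory (hypothetical path)
                "-x", "10",                           // maximum number of iterations
                "-cl"                                 // also run the final clustering step
        });
        System.exit(rc);
    }
}

Note that running the driver in-process inside an Oozie java action still means one job submitting further jobs, so by itself this does not change the "job within a job" situation described below.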
------ Solution --------------------------------------------
Calling Linux shell commands from Java has given me problems too; I could not get them to run.

I ended up working around it by calling through JNI directly.

From C / Python there was no problem.
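Before going all the way to JNI, it may be worth trying ProcessBuilder with stderr merged into stdout, so there is only one pipe to drain. A small sketch with a placeholder command:

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class ExecWithProcessBuilder {
    public static void main(String[] args) throws Exception {
        // Placeholder command; in the thread's case this would be the mahout command line
        ProcessBuilder pb = new ProcessBuilder("bash", "-c", "echo hello");
        pb.redirectErrorStream(true);          // merge stderr into stdout: one stream to drain
        Process p = pb.start();
        try (BufferedReader br = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = br.readLine()) != null) {
                System.out.println("Output is: " + line);
            }
        }
        int exit = p.waitFor();                // wait only after the output has been drained
        System.out.println("Exit code: " + exit);
    }
}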
------ For reference only --------------------------------------------


Running it at the command line works fine; it only gets stuck when it is launched through this program.
There is no other error in the log, just the heartbeat.
The command I submit is itself a mahout job, so effectively one job is launching another job.
Could that be the cause? After all, the command does genuinely start to run.
------ For reference only --------------------------------------------


I suspect mahout is not the cause.
I tried oozie + sqoop and got the same problem: the map stays at 0%.

------ For reference only --------------------------------------------


The situation now is this: as long as I submit a job with oozie,
any mapreduce program submitted afterwards is blocked.
As soon as I manually kill the job that oozie submitted,
the remaining mapreduce programs continue executing.

What I find strange is that I cannot find any error anywhere.
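One thing worth checking here: in the Oozie versions of that era, a java/shell action runs inside a single-map launcher MapReduce job, which holds a map slot while it waits for the jobs it launches. On a cluster with very few slots that can starve the child jobs, which would match the "stuck at map 0% until the oozie job is killed" symptom. A rough diagnostic sketch using the classic (MRv1) mapred JobClient API to see how many slots are in use and which jobs are still pending:

import org.apache.hadoop.mapred.ClusterStatus;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobStatus;

public class ClusterSlotCheck {
    public static void main(String[] args) throws Exception {
        // Talks to the JobTracker configured in the local Hadoop client configuration
        JobClient jc = new JobClient(new JobConf());

        // Occupied vs. total map/reduce slots on the cluster
        ClusterStatus cs = jc.getClusterStatus();
        System.out.println("map slots in use:    " + cs.getMapTasks() + " / " + cs.getMaxMapTasks());
        System.out.println("reduce slots in use: " + cs.getReduceTasks() + " / " + cs.getMaxReduceTasks());

        // Jobs that are still running or waiting, with their map progress
        for (JobStatus js : jc.jobsToComplete()) {
            System.out.println(js.getJobID() + "  map " + (int) (js.mapProgress() * 100) + "%");
        }
    }
}

If the launcher job turns out to be holding the last free map slot, the usual workarounds are adding slots or routing launchers to a separate queue via Oozie's oozie.launcher.* action configuration.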
------ For reference only --------------------------------------------

Quote:
The situation now is this: as long as I submit a job with oozie,
any mapreduce program submitted afterwards is blocked.
As soon as I manually kill the job that oozie submitted,
the remaining mapreduce programs continue executing.
What I find strange is that I cannot find any error anywhere.

That really is strange.
