March 2014

Question

water drainage question


Is a village or city allowed to drain rain runoff from an addition onto private property?



Answer

Re: water drainage question


Thank you for your question. The best answer I can give you at this point is "maybe". I wish I could be more definitive.



The municipal body is emptying runoff onto your property? Was that happening when you first acquired the property, or was it the result of some construction or other change made to the adjacent property? It would also be helpful to see your deed.



I would be interested in speaking with you further. Please feel free to call or email me; [email protected]



Thanks -



Answer

Re: water drainage question


No, they should not be allowed to do this unless they have prior permission.





Question

Evidentiary Sanction


I have been going to court for the last six months in order to compel my ex-wife to produce her financial records. Since she did not comply with multiple requests by the court to produce the financial documents, the court finally ordered an evidentiary sanction against her. The court also granted my motion to compel.



The question now is what to do with these orders.


1. How can I get the documents from her?


2. If she does not produce the documents, how will I know how much money she is hiding and how will I proceed to the final trial and division of the property?



I would really appreciate your help. I am representing myself. Thank you.



Regards,


AJ



Answer

Re: Evidentiary Sanction


There are many different avenues you could take, but without reviewing the judge's order and the specifics of your financial situation it is impossible to give you good advice. I would suggest that you retain an attorney; the investment will probably pay for itself in the long run. Good luck, Pat McCrary



Answer

Re: Evidentiary Sanction


Try to find another way of proving whatever you're trying to prove. If you have firsthand knowledge, you can testify, and she will likely be precluded from introducing any contradictory evidence.





Hadoop - a reducer is not being initiated

I am trying to run the open-source kNN join MapReduce hbrj algorithm on Hadoop 2.6.0, installed on my laptop (OS X) as a single-node cluster in pseudo-distributed mode. This is the code.



Mapper, reducer and the main driver:



public class RPhase2 extends Configured implements Tool
{
    public static class MapClass extends MapReduceBase
        implements Mapper<LongWritable, Text, IntWritable, RPhase2Value>
    {
        public void map(LongWritable key, Text value,
                        OutputCollector<IntWritable, RPhase2Value> output,
                        Reporter reporter) throws IOException
        {
            String line = value.toString();
            String[] parts = line.split(" +");
            // key format <rid1>
            IntWritable mapKey = new IntWritable(Integer.valueOf(parts[0]));
            // value format <rid2, dist>
            RPhase2Value np2v = new RPhase2Value(Integer.valueOf(parts[1]), Float.valueOf(parts[2]));
            System.out.println("############### key: " + mapKey.toString() + " np2v: " + np2v.toString());
            output.collect(mapKey, np2v);
        }
    }

    public static class Reduce extends MapReduceBase
        implements Reducer<IntWritable, RPhase2Value, NullWritable, Text>
    {
        int numberOfPartition;
        int knn;

        class Record {...}

        class RecordComparator implements Comparator<Record> {...}

        public void configure(JobConf job)
        {
            numberOfPartition = job.getInt("numberOfPartition", 2);
            knn = job.getInt("knn", 3);
            System.out.println("########## configuring!");
        }

        public void reduce(IntWritable key, Iterator<RPhase2Value> values,
                           OutputCollector<NullWritable, Text> output,
                           Reporter reporter) throws IOException
        {
            // initialize the pq
            RecordComparator rc = new RecordComparator();
            PriorityQueue<Record> pq = new PriorityQueue<Record>(knn + 1, rc);

            System.out.println("Phase 2 is at reduce");
            System.out.println("########## key: " + key.toString());

            // For each record we have a reduce task
            // value format <rid1, rid2, dist>
            while (values.hasNext())
            {
                RPhase2Value np2v = values.next();

                int id2 = np2v.getFirst().get();
                float dist = np2v.getSecond().get();
                Record record = new Record(id2, dist);
                pq.add(record);
                if (pq.size() > knn)
                    pq.poll();
            }

            while (pq.size() > 0)
            {
                output.collect(NullWritable.get(), new Text(key.toString() + " " + pq.poll().toString()));
                //break; // only output the first record
            }

        } // reduce
    } // Reducer

    public int run(String[] args) throws Exception {
        JobConf conf = new JobConf(getConf(), RPhase2.class);
        conf.setJobName("RPhase2");

        conf.setMapOutputKeyClass(IntWritable.class);
        conf.setMapOutputValueClass(RPhase2Value.class);

        conf.setOutputKeyClass(NullWritable.class);
        conf.setOutputValueClass(Text.class);

        conf.setMapperClass(MapClass.class);
        conf.setReducerClass(Reduce.class);

        int numberOfPartition = 0;
        List<String> other_args = new ArrayList<String>();

        for (int i = 0; i < args.length; ++i)
        {
            try {
                if ("-m".equals(args[i])) {
                    //conf.setNumMapTasks(Integer.parseInt(args[++i]));
                    ++i;
                } else if ("-r".equals(args[i])) {
                    conf.setNumReduceTasks(Integer.parseInt(args[++i]));
                } else if ("-p".equals(args[i])) {
                    numberOfPartition = Integer.parseInt(args[++i]);
                    conf.setInt("numberOfPartition", numberOfPartition);
                } else if ("-k".equals(args[i])) {
                    int knn = Integer.parseInt(args[++i]);
                    conf.setInt("knn", knn);
                    System.out.println(knn + "~ hi");
                } else {
                    other_args.add(args[i]);
                }
                conf.setNumReduceTasks(numberOfPartition * numberOfPartition);
                //conf.setNumReduceTasks(1);
            } catch (NumberFormatException except) {
                System.out.println("ERROR: Integer expected instead of " + args[i]);
                return printUsage();
            } catch (ArrayIndexOutOfBoundsException except) {
                System.out.println("ERROR: Required parameter missing from " + args[i-1]);
                return printUsage();
            }
        }

        FileInputFormat.setInputPaths(conf, other_args.get(0));
        FileOutputFormat.setOutputPath(conf, new Path(other_args.get(1)));

        JobClient.runJob(conf);
        return 0;
    }

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new RPhase2(), args);
    }
} // RPhase2


When I run this, the mapper completes successfully, but the job then terminates abruptly and the reducer is never instantiated. Moreover, no errors are ever printed (even in the log files). I also know this because the print statements in the Reducer's configure() method are never printed. Output:



15/06/15 14:00:37 INFO mapred.LocalJobRunner: map task executor complete.
15/06/15 14:00:38 INFO mapreduce.Job: map 100% reduce 0%
15/06/15 14:00:38 INFO mapreduce.Job: Job job_local833125918_0001 completed successfully
15/06/15 14:00:38 INFO mapreduce.Job: Counters: 20
File System Counters
FILE: Number of bytes read=12505456
FILE: Number of bytes written=14977422
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=11408
HDFS: Number of bytes written=8724
HDFS: Number of read operations=216
HDFS: Number of large read operations=0
HDFS: Number of write operations=99
Map-Reduce Framework
Map input records=60
Map output records=60
Input split bytes=963
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=14
Total committed heap usage (bytes)=1717567488
File Input Format Counters
Bytes Read=2153
File Output Format Counters
Bytes Written=1645


What I have done so far:




I have been looking at similar questions, and the most frequent problem I found is not configuring the map output classes when the mapper's output types differ from the reducer's; that is already done in the code above: conf.setMapOutputKeyClass(Class); conf.setMapOutputValueClass(Class);


In another post I found a suggestion to change reduce(..., Iterator <...>, ...) to (..., Iterable <...>, ...), but that would not compile: I could no longer use the .hasNext() and .next() methods, and I got this error:



error: Reduce is not abstract and does not override abstract method reduce(IntWritable,Iterator,OutputCollector,Reporter) in Reducer
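
For context, that error comes from mixing the two Hadoop APIs: the old org.apache.hadoop.mapred interface (which the code above uses via MapReduceBase, JobConf and OutputCollector) expects an Iterator, while the Iterable signature belongs to the newer org.apache.hadoop.mapreduce.Reducer, which uses a Context object instead. A minimal sketch of the old-API shape this class has to keep (the class name and value type here are illustrative, not from the original code):

import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// Old ("mapred") API reducer: takes an Iterator and an OutputCollector.
// Switching the parameter to Iterable only makes sense when extending the
// new-API org.apache.hadoop.mapreduce.Reducer, not when implementing this one.
public class OldApiReduceSketch extends MapReduceBase
        implements Reducer<IntWritable, Text, NullWritable, Text> {
    public void reduce(IntWritable key, Iterator<Text> values,
                       OutputCollector<NullWritable, Text> output,
                       Reporter reporter) throws IOException {
        while (values.hasNext()) {
            output.collect(NullWritable.get(), values.next());
        }
    }
}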




If anyone has any hints or suggestions on what I can try in order to find the issue, I would be very appreciative!



Just a note that I have posted a question about this problem here before (Hadoop kNN join algorithm stuck at map 100% reduce 0%), but it did not get enough attention, so I wanted to re-ask it from a different perspective. You can use that link for more details on my log files.



Answers

I have figured out the problem, and it was something silly. If you notice in the code above, numberOfPartition is set to 0 before the arguments are read, and the number of reducers is set to numberOfPartition * numberOfPartition. As the user, I did not change the number-of-partitions parameter (mostly because I simply copy-pasted the argument line from the provided README), so the number of reduce tasks stayed at 0 and the reducer never even started.
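
As a sketch of the fix (a hypothetical helper, not part of the original code): compute the reducer count once, after all arguments have been parsed, and keep it from falling to zero, since setNumReduceTasks(0) makes the job map-only and the reduce phase never runs.

import org.apache.hadoop.mapred.JobConf;

// Hypothetical helper illustrating the fix: derive the reducer count from the
// parsed -p value once, after the argument loop, and never let it drop to 0.
public class ReducerCountFix {
    static void configureReducers(JobConf conf, int numberOfPartition) {
        int reducers = Math.max(1, numberOfPartition * numberOfPartition);
        conf.setNumReduceTasks(reducers);
    }
}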





Unable to Add Image to Document Using DocX4J

I'm trying to add an image to a document (.docx) using the Docx4j library with the code below. The image already exists on my local machine. Initially I thought Docx4j doesn't support PNG, so I renamed the image to .jpg, but it still throws an error.



String userSignatureFile = "C:\\esignature\\sign.jpg";

// read the signature image into Bytes

InputStream inputStream = new java.io.FileInputStream(userSignatureFile);
long fileLength = userSignatureFile.length();

byte[] bytes = new byte[(int)fileLength];

int offset = 0;
int numRead = 0;

while (offset < bytes.length
        && (numRead = inputStream.read(bytes, offset, bytes.length - offset)) >= 0) {
    offset += numRead;
}

inputStream.close();

String filenameHint = null;
String altText = null;

int id1 = 0;
int id2 = 1;

// create Inline Image

BinaryPartAbstractImage imagePart = BinaryPartAbstractImage.createImagePart(wordPackage, bytes);
Inline inline = imagePart.createImageInline( filenameHint, altText, id1, id2);

// Create Drawing and add to Run
Drawing imageDrawing = factory.createDrawing();
imageDrawing.getAnchorOrInline().add(inline);
// add Text to Run
run.getContent().add(imageDrawing);

// add Run to Paragraph
((P) jaxbNode).getContent().add(run);


And below is the error message



    Exception in thread "main" org.docx4j.openpackaging.exceptions.Docx4JException: Error checking image format
at org.docx4j.openpackaging.parts.WordprocessingML.BinaryPartAbstractImage.ensureFormatIsSupported(BinaryPartAbstractImage.java:429)
at org.docx4j.openpackaging.parts.WordprocessingML.BinaryPartAbstractImage.ensureFormatIsSupported(BinaryPartAbstractImage.java:331)
at org.docx4j.openpackaging.parts.WordprocessingML.BinaryPartAbstractImage.createImagePart(BinaryPartAbstractImage.java:225)
at org.docx4j.openpackaging.parts.WordprocessingML.BinaryPartAbstractImage.createImagePart(BinaryPartAbstractImage.java:144)

Caused by: java.io.IOException: Cannot run program "imconvert": CreateProcess error=2, The system cannot find the file specified
at java.lang.ProcessBuilder.start(Unknown Source)
at java.lang.Runtime.exec(Unknown Source)
at java.lang.Runtime.exec(Unknown Source)
at java.lang.Runtime.exec(Unknown Source)
at org.docx4j.openpackaging.parts.WordprocessingML.BinaryPartAbstractImage.convertToPNG(BinaryPartAbstractImage.java:905)
at org.docx4j.openpackaging.parts.WordprocessingML.BinaryPartAbstractImage.ensureFormatIsSupported(BinaryPartAbstractImage.java:413)
... 6 more
Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
at java.lang.ProcessImpl.create(Native Method)
at java.lang.ProcessImpl.<init>(Unknown Source)
at java.lang.ProcessImpl.start(Unknown Source)
... 12 more


Answers

Actually, it was my mistake: I created the input stream by passing the file path string directly (without a File). After correcting it as below, it worked.



Correct



File file = new File(userSignatureFile);

// read the signature image into Bytes

InputStream inputStream = new java.io.FileInputStream(file);


Wrong



        InputStream inputStream = new java.io.FileInputStream(userSignatureFile);
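
For what it's worth, the likely root cause is visible in the original snippet: userSignatureFile.length() is the length of the String, not the size of the file, so only a handful of bytes were read and docx4j could not identify the image format, which is why it fell back to the external "imconvert" (ImageMagick) tool and failed. A hedged sketch of a simpler way to read the bytes (the helper name is illustrative):

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

// Sketch: read the whole image file without manual buffer bookkeeping.
// Files.readAllBytes uses the real file size, unlike String.length().
public class SignatureBytes {
    static byte[] readSignature(String path) throws IOException {
        return Files.readAllBytes(new File(path).toPath());
    }
}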




CouchDB: bulk_docs returning incorrect status code

I'm working on syncing a PouchDB database (with Angular) with a CouchDB database.



When the replication is in progress, the code is issuing a POST request to do a bulk update to http://127.0.0.1:5984/testdb/_bulk_docs.



I have a validation rule on the database to reject unauthorized writes, and it generates a forbidden error. The server therefore responds with a JSON body such as [{"id":"0951db944e729c981ad3964c22002d55","rev":"8-ccdcb52743cae43c5870113f09f2e25a","error":"forbidden","reason":"Not Authorized"}]



According to the docs (at the end of the page), the above response should generate a 417 Expectation Failed status code. However, it currently generates a 201 Created status code.



Because of the incorrect response code, the client (PouchDB) shows all records as synced, but the updates are not written to the server (CouchDB).



Is there a config option to change this status code?



For reference, my validate_doc_update function is as follows.



function(newDoc, oldDoc, userCtx) {
    if (!userCtx) throw({forbidden: 'Need a user to update'});

    if ((userCtx.roles.indexOf('_admin') == -1) && (userCtx.roles.indexOf('backend:manager') == -1)) {
        throw({forbidden: "Not Authorized"});
    }
}


Answers

The 417 Expectation Failed status code only applies when the all_or_nothing parameter is set to true. By default this parameter is false.



The default bulk-update transaction mode in CouchDB is non-atomic, which only guarantees that some of the documents will be saved. For each document that is not saved, the API returns an error object like the one you got, alongside the documents that were in fact saved successfully. So 201 seems to be the correct response.



Then you've got to walk through the response to find which documents failed and manually update them.



In all_or_nothing mode, however, success will be returned only if all the documents have been updated.
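
For illustration, a minimal sketch of posting to _bulk_docs with all_or_nothing enabled (the endpoint URL is the one from your question; the document body is a placeholder, and this assumes a CouchDB version that still accepts the all_or_nothing flag):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: bulk update with "all_or_nothing": true, the mode the 417 status
// applies to. Without it, the server answers 201 even when individual
// documents are rejected; each failure shows up as an "error" entry in the
// response array and has to be inspected by the client.
public class BulkDocsAllOrNothing {
    public static void main(String[] args) throws Exception {
        String body = "{ \"all_or_nothing\": true, \"docs\": ["
                + "{ \"_id\": \"example\", \"value\": 1 } ] }";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://127.0.0.1:5984/testdb/_bulk_docs"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}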



For syncing, you can also use the _replicate endpoint, which has many features that bulk update does not.




