Discussion:
SIGSEGV error - StubRoutines::jlong_disjoint_arraycopy
Lee, David
2017-06-15 14:19:53 UTC
Last week we started seeing the error below, which terminates the Drill service pid. My research suggests it is a space issue with /tmp, but we have plenty of free space.

I'm using dfs.tmp to convert JSON files (2 to 3 GB each) into Parquet.

Has anyone encountered this issue before? I've tried different versions of the JVM with no success.
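For context, the conversion is a plain CTAS through the dfs.tmp workspace, along these lines (the file and table names here are illustrative, not my actual paths):

```sql
-- Write Parquet output; Drill's CTAS uses the session's store.format.
ALTER SESSION SET `store.format` = 'parquet';

-- Read a large JSON file and stage the Parquet result in dfs.tmp.
CREATE TABLE dfs.tmp.`converted` AS
SELECT * FROM dfs.`/staging/input.json`;
```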

Output from drillbit.out:

672 values, 512,796B raw, 465,932B comp, 1 pages, encodings: [RLE, PLAIN_DICTIONARY], dic { 3,376 entries, 27,008B raw, 3,376B comp}
Jun 15, 2017 1:39
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f724c052570, pid=33814, tid=140128310007552
#
# JRE version: Java(TM) SE Runtime Environment (8.0_45-b14) (build 1.8.0_45-b14)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.45-b02 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# v ~StubRoutines::jlong_disjoint_arraycopy
#
# Core dump written. Default location: /usr/hdp/2.3.4.0-3485/core or core.33814
#
# An error report file with more information is saved as:
# /tmp/hs_err_pid33814.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
#

Changed parquet settings:

store.parquet.block-size CHANGED to 134217728 (matches the HDFS block size; I FTP the files from DFS to HDFS afterwards).
store.parquet.enable_dictionary_encoding CHANGED to true (dictionary encoding turned on).
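For reference, both options were changed with ALTER SYSTEM, using exactly the values listed above:

```sql
-- 128 MB row groups, matching the HDFS block size.
ALTER SYSTEM SET `store.parquet.block-size` = 134217728;

-- Enable dictionary encoding for Parquet output.
ALTER SYSTEM SET `store.parquet.enable_dictionary_encoding` = true;
```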


This message may contain information that is confidential or privileged. If you are not the intended recipient, please advise the sender immediately and delete this message. See http://www.blackrock.com/corporate/en-us/compliance/email-disclaimers for further information. Please refer to http://www.blackrock.com/corporate/en-us/compliance/privacy-policy for more information about BlackRock’s Privacy Policy.

For a list of BlackRock's office addresses worldwide, see http://www.blackrock.com/corporate/en-us/about-us/contacts-locations.

© 2017 BlackRock
Kunal Khatua
2017-06-15 17:52:27 UTC
Nope, I haven't seen this before.

Can you share more details from the log messages? The problem might be related to the JSON files being very large, because the segmentation fault that crashed the JVM (Drillbit) occurred during the write of the Parquet files.

I take it you are writing to the local FS's temp space before moving the files into HDFS. Is there a reason you're not writing directly into HDFS?
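If it helps, a file-system storage plugin can point straight at HDFS, so the CTAS output never touches local disk. A rough sketch of the plugin JSON (the namenode address and paths are placeholders for your cluster's values):

```json
{
  "type": "file",
  "connection": "hdfs://namenode:8020",
  "workspaces": {
    "tmp": {
      "location": "/tmp",
      "writable": true,
      "defaultInputFormat": null
    }
  },
  "formats": {
    "parquet": { "type": "parquet" },
    "json": { "type": "json" }
  }
}
```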



Lee, David
2017-06-15 18:15:09 UTC
Yeah, it only crashes on the larger JSON files. I'm reworking my Python script to use hdfs.tmp instead of dfs.tmp now.

Kunal Khatua
2017-06-15 18:35:58 UTC
I'm skeptical that switching to HDFS as the target storage will prevent the segfault by itself. At most it will spare you the FTP step (which is a saving anyway, IMHO).


You might need to increase the memory allocation for Drill. See if JConsole reveals whether heap memory is hitting a limit. Based on that, experiment with increasing the heap memory or the direct memory for the Drillbit.
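The Drillbit memory limits live in conf/drill-env.sh; a sketch of what to experiment with (the sizes here are illustrative, not recommendations for your cluster):

```shell
# conf/drill-env.sh
# JVM heap for the Drillbit process.
export DRILL_HEAP="8G"
# Off-heap (direct) memory, where Drill does most of its record buffering.
export DRILL_MAX_DIRECT_MEMORY="16G"
```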





