IRC log for #gluster, 2015-03-18


All times are shown in UTC.

Time Nick Message
00:16 dgandhi joined #gluster
00:34 badone_ joined #gluster
00:37 _polto_ joined #gluster
00:40 badone__ joined #gluster
00:40 bala joined #gluster
00:42 diegows joined #gluster
00:48 topshare joined #gluster
00:54 DV__ joined #gluster
01:18 plarsen joined #gluster
01:20 kripper1 joined #gluster
01:20 kripper1 hi
01:20 glusterbot kripper1: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
01:20 kripper1 does sanlock work with gluster?
01:21 kripper1 Sanlock is giving me "invalid lockspace found -1 failed 0"
01:44 meghanam joined #gluster
01:46 corretico joined #gluster
01:53 nangthang joined #gluster
02:19 topshare joined #gluster
02:19 punit__ joined #gluster
02:20 punit__ Hi all
02:20 punit__ i am facing one strange issue with glusterfs
02:20 punit__ i have setup glusterfs with ovirt in my ENV
02:22 punit__ earlier all the guest VMs could be created, started & stopped, but after a reboot of one of my gluster nodes...now none of the VMs can be powered on....they fail with the error...Bad Volume Specification
02:26 topshare joined #gluster
02:28 bharata-rao joined #gluster
02:29 jmarley joined #gluster
02:36 topshare joined #gluster
02:37 haomaiwa_ joined #gluster
02:47 topshare joined #gluster
02:56 wkf joined #gluster
03:01 punit__ vijay, can you help me here
03:07 bharata_ joined #gluster
03:08 punit__ hagarth, is there anyone who can help me solve the Bad Volume Specification issue...
03:10 capri joined #gluster
03:17 hagarth punit__: looks like we need some vdsm help, I have forwarded your query to a few folks who might know better. Please check back later as they will be active then.
03:18 punit__ hagarth,i communicated with the ovirt community and they suggested the following :-  punit_: I read the code snippets about the connection timeout error, it is thrown out by the ioprocess module. And from the traceback in the vdsm log, it was running command 'glob' and timed out, so the only thing I can come up with is the performance problem of glusterfs.
03:20 hagarth punit__: did you get a chance to check the gluster logs as i mentioned?
03:20 punit__ hagarth,please let me know where i can find the gluster client logs ??
03:21 hagarth punit__: I did reply to your email with that detail earlier
03:21 punit__ ok
03:25 hagarth punit__: please reply with details on the list, I will be away now
03:25 punit__ hagarth,
03:25 punit__ hagarth,ok...i will update you with the log details
03:26 kdhananjay joined #gluster
03:30 punit__ hagarth,please find the logs from here ...http://paste.ubuntu.com/10618869/
03:32 gnudna joined #gluster
03:33 shubhendu joined #gluster
03:34 gnudna guys, can you recommend a particular version of gluster which is stable? looking to use it for replication to store kvm images
03:36 kumar joined #gluster
03:45 itisravi joined #gluster
03:47 gildub joined #gluster
03:54 rafi joined #gluster
04:04 atinmu joined #gluster
04:06 gnudna left #gluster
04:14 m0zes joined #gluster
04:14 RameshN joined #gluster
04:18 bala joined #gluster
04:20 Apeksha joined #gluster
04:22 ppai joined #gluster
04:24 punit__ * can anyone else help me here https://www.mail-archive.com/gluster-users@gluster.org/msg19440.html
04:26 rjoseph joined #gluster
04:30 kanagaraj joined #gluster
04:35 schandra joined #gluster
04:37 punit__ kanagaraj, can you help me here https://www.mail-archive.com/gluster-users@gluster.org/msg19440.html
04:38 soumya_ joined #gluster
04:39 anoopcs joined #gluster
04:40 kanagaraj punit__, sorry i don't have much knowledge on this. Somebody from gluster-dev can help you out.
04:41 jiffin joined #gluster
04:41 kanagaraj punit__, i could see some errors in the gluster-client log, but not sure what they mean
04:41 punit__ kanagaraj,yes...i also saw those errors and updated vijay to have a look
04:42 punit__ kanagaraj, as all of my VMs are down...and cannot power on because of this problem
04:43 punit__ kanagaraj,also cannot remove any VM...the VM gets removed but the disk still exists with the status illegal..
04:43 kanagaraj punit__, also these errors could have led vdsm to time out
04:43 punit__ kanagaraj,yes
04:43 kanagaraj punit__, ok
04:44 Apeksha joined #gluster
04:44 punit__ kanagaraj,please help me if you have any idea about it...as all of my VMs have been down for the last 36 hours
04:44 kanagaraj punit__, are you seeing any issues if you try to mount the gluster volume?
04:45 Apeksha joined #gluster
04:45 punit__ kanagaraj,no...the glusterfs volume mounted successfully
04:45 kanagaraj punit__, ok, i don't know the internal details of it. i hope someone from gluster-dev can help you.
04:47 punit__ kanagaraj,it's ok...i will wait for their reply...would you mind letting me know the gluster-dev contacts..so i can get in touch with them
04:48 kanagaraj punit__, ok
04:49 nbalacha joined #gluster
04:54 hagarth joined #gluster
04:58 n-st joined #gluster
05:00 ppp joined #gluster
05:05 anil joined #gluster
05:07 nbalacha joined #gluster
05:07 T3 joined #gluster
05:13 smohan joined #gluster
05:15 sputnik13 joined #gluster
05:15 kshlm joined #gluster
05:16 corretico joined #gluster
05:16 nshaikh joined #gluster
05:18 Manikandan joined #gluster
05:18 Manikandan_ joined #gluster
05:18 Manikandan_ left #gluster
05:18 mikedep333 joined #gluster
05:19 lalatenduM joined #gluster
05:19 meghanam joined #gluster
05:21 glusterbot News from newglusterbugs: [Bug 1203086] Dist-geo-rep: geo-rep create fails with slave not empty <https://bugzilla.redhat.com/show_bug.cgi?id=1203086>
05:22 Bhaskarakiran joined #gluster
05:25 sputnik13 joined #gluster
05:28 deepakcs joined #gluster
05:28 shubhendu joined #gluster
05:32 dusmant joined #gluster
05:34 Manikandan joined #gluster
05:34 Apeksha joined #gluster
05:42 gem joined #gluster
05:46 ramteid joined #gluster
05:51 glusterbot News from newglusterbugs: [Bug 1203089] Disperse volume: misleading unsuccessful message with heal and heal full <https://bugzilla.redhat.com/show_bug.cgi?id=1203089>
05:52 gem joined #gluster
06:01 raghu joined #gluster
06:01 soumya joined #gluster
06:09 punit__ hagarth,i sent the log file...please let me know once you find any clue to resolve it...
06:12 lifeofguenter joined #gluster
06:15 wkf joined #gluster
06:16 lifeofgu_ joined #gluster
06:16 kdhananjay joined #gluster
06:17 karnan joined #gluster
06:20 soumya joined #gluster
06:24 overclk joined #gluster
06:29 free_amitc_ joined #gluster
06:32 Philambdo joined #gluster
06:32 corretico joined #gluster
06:33 raghu punit__: can you please provide the complete log file and if possible the log files of brick processes as well
06:35 punit__ raghu,yes i can provide...please let me know the preferred way to provide you the log file ??
06:35 atalur joined #gluster
06:36 raghu you can attach them to the mail you posted in gluster-users
06:45 punit__ raghu, log file uploaded here http://www.filedropper.com/logtar
06:45 raghu punit__: thanks. Let me take a look at it
06:45 punit__ ok thanks..
06:45 punit__ raghu, ok thanks ..
06:47 rjoseph joined #gluster
06:49 maveric_amitc_ joined #gluster
06:53 dusmant joined #gluster
06:53 raghu punit__: can you also attach the complete client log file?
06:54 kdhananjay joined #gluster
06:54 atalur joined #gluster
06:55 punit__ raghu,i have already attached the complete client log file in this tar file
06:55 atalur_ joined #gluster
06:56 dusmant joined #gluster
06:56 T3 joined #gluster
06:57 raghu when I tried to untar it I got the below error
06:57 raghu tar -xvf log.tar.gz
06:57 raghu bricks/
06:57 raghu bricks/bricks-6-vol1.log
06:57 raghu bricks/bricks-21-vol1.log
06:57 raghu bricks/bricks-19-vol1.log
06:57 raghu bricks/bricks-15-vol1.log
06:57 raghu bricks/bricks-10-vol1.log
06:57 raghu bricks/bricks-3-vol1.log
06:57 raghu bricks/bricks-24-vol1.log
06:57 raghu bricks/bricks-23-vol1.log
06:57 raghu bricks/bricks-18-vol1.log
06:57 raghu bricks/bricks-9-vol1.log
06:57 raghu bricks/bricks-8-vol1.log
06:57 punit__ raghu,
06:57 punit__ raghu,i will upload the client log file again separately
06:58 raghu punit__: sure. Thanks a lot :)
06:59 vimal joined #gluster
06:59 smohan joined #gluster
06:59 prg3 joined #gluster
07:04 punit__ raghu, log file uploaded here http://www.filedropper.com/rhev-data-center-mnt-glustersd-1010014ds01
07:06 raghu punit__: ok. Will take a look
07:09 T0aD joined #gluster
07:10 mbukatov joined #gluster
07:14 raghu punit__: can you please give the o/p of this command? (gluster volume info <volume name>)
07:15 punit__ raghu : output here http://paste.ubuntu.com/10619422/
07:15 jiku joined #gluster
07:16 punit__ raghu, output here http://paste.ubuntu.com/10619422/
07:18 raghu also the o/p of "gluster volume status <volume name>".
07:20 punit__ raghu, please find gluster status from here http://paste.ubuntu.com/10619448/
07:23 raghu punit__: was there any network problem in the cluster?
07:24 punit__ raghu, no...i am using bonding on the storage network with 10G...so there should be no problem there
07:27 rjoseph joined #gluster
07:27 raghu hmm.  Is glusterd up and running?
07:28 punit__ Raghu,
07:29 punit__ raghu, yes gluster is up and running
07:29 kovshenin joined #gluster
07:30 punit__ raghu, glusterd status http://paste.ubuntu.com/10619486/
07:33 punit__ raghu, gluster pool list and network http://paste.ubuntu.com/10619501/
07:34 nangthang joined #gluster
07:39 jtux joined #gluster
07:41 anrao joined #gluster
07:41 dusmant joined #gluster
07:47 corretico joined #gluster
07:52 SOLDIERz__ joined #gluster
07:54 raghu punit__: can you please increase the ping-timeout to 100 seconds and check if it helps? you can do it by this command "gluster volume set <volume name> network.ping-timeout 100"
07:55 PaulCuzner joined #gluster
08:00 SOLDIERz____ joined #gluster
08:08 harish_ joined #gluster
08:09 zerick joined #gluster
08:10 T0aD joined #gluster
08:12 T3 joined #gluster
08:13 topshare joined #gluster
08:15 ashiq joined #gluster
08:21 hgowtham joined #gluster
08:21 punit__ raghu,i have done the modification...now i will check again
08:22 punit__ raghu,do i need to restart the glusterd service ??
08:25 zerick joined #gluster
08:29 punit__ raghu, i have restarted the glusterd service but still the same error...cannot power on the vm and it times out...find the logs here http://paste.ubuntu.com/10619650/
08:34 [Enrico] joined #gluster
08:35 topshare joined #gluster
08:37 _polto_ joined #gluster
08:38 hagarth joined #gluster
08:44 kaushal_ joined #gluster
08:47 topshare_ joined #gluster
08:49 punit__ raghu, did you find anything abnormal ??
08:49 shubhendu joined #gluster
08:49 dusmant joined #gluster
08:51 glusterbot News from newglusterbugs: [Bug 1203135] geo-replication spuriously ignores FQDN <https://bugzilla.redhat.com/show_bug.cgi?id=1203135>
08:52 raghu punit__: For some reason, the connections between the clients and the bricks are being disconnected. Can you give the client log file again? Want to see the log messages after ping-timeout is increased
08:57 fsimonce joined #gluster
08:58 punit__ raghu,please find the latest client logs here http://www.filedropper.com/rhevtar
08:59 Slashman joined #gluster
08:59 lalatenduM joined #gluster
09:04 atinmu joined #gluster
09:04 topshare joined #gluster
09:07 liquidat joined #gluster
09:08 jflf joined #gluster
09:08 ppai joined #gluster
09:10 corretico joined #gluster
09:12 topshare joined #gluster
09:15 sankarshan_ joined #gluster
09:18 raghu punit__: Unfortunately downloading the log from the link you provided is failing every time :(
09:19 anil joined #gluster
09:19 raghu This is the error firefox gives "Failed - filedropper.com - 02:47 PM"
09:20 punit__ raghu,i will check again wait
09:21 raghu punit__: sure.
09:21 tanuck joined #gluster
09:25 topshare_ joined #gluster
09:26 punit__ raghu,please check again with this link http://www.filedropper.com/rhev-data-center-mnt-glustersd-10100143ads01
09:29 verdurin left #gluster
09:29 DV joined #gluster
09:30 meghanam joined #gluster
09:30 _NiC joined #gluster
09:30 atinmu joined #gluster
09:31 ashiq joined #gluster
09:34 SOLDIERz_____ joined #gluster
09:37 punit__ raghu,are you able to download the log file now ??
09:37 Guest45380 joined #gluster
09:37 raghu punit__: Yeah, but the log file seems to be truncated. When I unzipped it, I got the below warning
09:37 raghu inflating: rhev-data-center-mnt-glusterSD-10.10.0.14%3A_ds01.log
09:37 raghu rhev-data-center-mnt-glusterSD-10.10.0.14%3A_ds01.log:  write error (disk full?).  Continue? (y/n/^C)
09:38 kaushal_ joined #gluster
09:38 raghu when I gave yes, it said this:  "bad CRC 31d05c53  (should be 139d2377)"
09:39 dusmant joined #gluster
09:39 punit__ raghu,should i give it to you in RAR format ??
09:39 bala1 joined #gluster
09:39 raghu The log file is generated, but it seems to be incomplete.  This is the last line (incomplete) of the log file "2015-03-18 03:12:48.400954] I [afr-common.c:3723:afr_local_init] 0-ds01-replicate-26: no"
09:40 raghu punit__: yeah. Lets try that as well
09:40 punit__ raghu, please try to download this one http://www.filedropper.com/rhev-data-​center-mnt-glustersd-10100143ads01_1
09:43 blubberdi joined #gluster
09:45 _polto_ I try to delete a directory, it says node is not connected, I retry, it says permission denied.
09:45 _polto_ looks like I broke my glusterfs again.
09:46 rjoseph joined #gluster
09:47 lalatenduM_ joined #gluster
09:48 Manikandan joined #gluster
09:50 atinmu joined #gluster
09:51 overclk joined #gluster
09:53 punit__ raghu, does the rar file work or not ??
09:53 raghu punit__: rar file is ok. Thanks. I am looking at the logs.
09:54 punit__ raghu, thanks
09:54 raghu punit__: What is the vdsm timeout value?
09:56 punit__ raghu, vdsm timeout value ...how can i find out this value...
09:57 punit__ raghu, it should be 180s http://www.ovirt.org/Sla/ha-timeouts
09:58 sankarshan joined #gluster
09:59 topshare joined #gluster
10:00 T3 joined #gluster
10:05 Manikandan joined #gluster
10:09 gildub joined #gluster
10:10 raghu punit__: I have a meeting now. Will get back later
10:12 kaushal_ joined #gluster
10:12 _polto_ joined #gluster
10:13 punit__ raghu, it's Ok...ping me when you come back
10:14 [Enrico] joined #gluster
10:15 badone joined #gluster
10:16 corretico joined #gluster
10:17 ppai joined #gluster
10:22 topshare joined #gluster
10:26 raghu joined #gluster
10:27 atinmu joined #gluster
10:30 lifeofguenter joined #gluster
10:36 deniszh joined #gluster
10:39 harish_ joined #gluster
10:48 Leildin joined #gluster
10:50 bene2 joined #gluster
10:53 smohan joined #gluster
10:56 bennyturns joined #gluster
10:57 atalur_ joined #gluster
11:01 T3 joined #gluster
11:02 sputnik13 joined #gluster
11:02 karnan joined #gluster
11:03 ira joined #gluster
11:03 firemanxbr joined #gluster
11:06 atinmu joined #gluster
11:13 ppai joined #gluster
11:15 karnan joined #gluster
11:17 anil joined #gluster
11:17 overclk joined #gluster
11:22 glusterbot News from newglusterbugs: [Bug 1203185] Detached node list stale snaps <https://bugzilla.redhat.com/show_bug.cgi?id=1203185>
11:23 rafi1 joined #gluster
11:26 hagarth joined #gluster
11:26 Guest45380 joined #gluster
11:28 davidbitton joined #gluster
11:32 atinmu joined #gluster
11:33 nbalacha joined #gluster
11:35 rafi joined #gluster
11:36 atalur_ joined #gluster
11:37 kkeithley_ Gluster Community Meeting in 25 minutes in #gluster-meeting
11:40 hgowtham joined #gluster
11:43 LebedevRI joined #gluster
11:48 overclk joined #gluster
11:48 Pupeno joined #gluster
11:52 glusterbot News from resolvedglusterbugs: [Bug 1200453] Permission denied for user with many secondary groups <https://bugzilla.redhat.com/show_bug.cgi?id=1200453>
11:54 nishanth joined #gluster
11:58 kkeithley_ Gluster Community Meeting starting now in #gluster-meeting
11:59 anoopcs joined #gluster
12:02 T3 joined #gluster
12:03 ildefonso joined #gluster
12:06 jdarcy joined #gluster
12:10 atinmu joined #gluster
12:11 overclk joined #gluster
12:14 Guest45380 joined #gluster
12:16 rjoseph joined #gluster
12:17 Manikandan joined #gluster
12:18 deniszh joined #gluster
12:26 punit__ raghu,pls let me know once you are available
12:28 Apeksha joined #gluster
12:30 meghanam joined #gluster
12:31 NuxRo guys, what would be a good way to copy some files between 2 volumes? any particular flags I should be using with rsync etc?
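A commonly suggested invocation for NuxRo's question, sketched with hypothetical mount paths and assuming both volumes are FUSE-mounted on the same host: --inplace avoids rsync's write-to-temp-then-rename pattern (the temp name hashes to the wrong DHT subvolume, leaving link files behind), and --whole-file skips the delta algorithm, which saves nothing when both ends are network mounts.

    rsync -aHAX --inplace --whole-file /mnt/vol-src/ /mnt/vol-dst/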
12:35 bene2 joined #gluster
12:40 jmarley joined #gluster
12:43 SOLDIERz_____ joined #gluster
12:45 _polto_ joined #gluster
12:52 shubhendu joined #gluster
12:53 p0licy joined #gluster
12:58 hagarth joined #gluster
13:00 sputnik13 joined #gluster
13:11 dusmant joined #gluster
13:14 kripper joined #gluster
13:14 julim joined #gluster
13:14 ktosiek joined #gluster
13:14 diegows joined #gluster
13:15 dgandhi joined #gluster
13:15 ktosiek semiosis: Is the glusterfs-3.4 for saucy PPA gone for good?
13:16 dgandhi joined #gluster
13:17 jmarley joined #gluster
13:17 dgandhi joined #gluster
13:18 dgandhi joined #gluster
13:19 dgandhi joined #gluster
13:19 smohan joined #gluster
13:21 _Bryan_ joined #gluster
13:23 atalur_ joined #gluster
13:23 kripper left #gluster
13:28 soumya joined #gluster
13:31 hamiller joined #gluster
13:33 georgeh-LT2 joined #gluster
13:37 theron joined #gluster
13:38 kshlm joined #gluster
13:39 T3 joined #gluster
13:41 topshare joined #gluster
13:48 Apeksha joined #gluster
13:54 jflf If anyone has a working Samba export of a Gluster volume with gfapi, could you please tell me if you have installed the glusterfs-server package on the node doing the Samba export, and whether that node is part of the trusted pool?
13:56 jflf Currently on my export node I only have the glusterfs-fuse, samba-vfs-glusterfs packages and their dependencies, but I get "Transport endpoint is not connected" errors
13:56 jflf And the logs indicate that there is an attempt at connection to the localhost: 0-gfapi: connection to 127.0.0.1:24007 failed (Connection refused)
13:57 jflf Bits of the smb.conf related to gluster:
13:57 jflf vfs objects = glusterfs
13:57 jflf glusterfs:loglevel = 7
13:57 jflf glusterfs:logfile = /var/log/samba/glusterfs-live.log
13:57 jflf glusterfs:volume = live
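Pieced together, a minimal share stanza along those lines might look like the following; a sketch with a hypothetical server name, where glusterfs:volfile_server points the vfs module at a pool member instead of the default localhost (the refused 127.0.0.1:24007 connection above):

    [live]
        path = /
        read only = no
        vfs objects = glusterfs
        glusterfs:volume = live
        glusterfs:volfile_server = gserver1
        glusterfs:logfile = /var/log/samba/glusterfs-live.log
        glusterfs:loglevel = 7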
13:58 hamiller left #gluster
13:59 jmarley joined #gluster
14:05 nbalacha joined #gluster
14:08 plarsen joined #gluster
14:11 dgandhi joined #gluster
14:12 dgandhi joined #gluster
14:14 dgandhi joined #gluster
14:14 dgandhi joined #gluster
14:15 kdhananjay joined #gluster
14:15 nishanth joined #gluster
14:20 bala1 joined #gluster
14:20 Bhaskarakiran joined #gluster
14:21 wushudoin| joined #gluster
14:23 glusterbot News from newglusterbugs: [Bug 1203293] Dist-geo-rep: geo-rep goes to faulty trying to sync .trashcan directory <https://bugzilla.redhat.com/show_bug.cgi?id=1203293>
14:27 rafi joined #gluster
14:27 lifeofgu_ joined #gluster
14:29 lifeofg__ joined #gluster
14:29 atinmu joined #gluster
14:30 luis_silva joined #gluster
14:32 roost joined #gluster
14:33 SOLDIERz______ joined #gluster
14:39 nbalacha joined #gluster
14:45 harish_ joined #gluster
14:48 Apeksha joined #gluster
14:53 sputnik13 joined #gluster
14:57 plarsen joined #gluster
15:00 kovsheni_ joined #gluster
15:04 sputnik13 joined #gluster
15:05 corretico joined #gluster
15:08 lifeofguenter joined #gluster
15:17 dusmant joined #gluster
15:19 pcaruana joined #gluster
15:21 hamiller joined #gluster
15:21 pkoro joined #gluster
15:24 semiosis ktosiek: yes it is.  please switch to the new ,,(ppa) s
15:24 glusterbot ktosiek: The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
15:26 semiosis ktosiek: furthermore, saucy is unsupported, by ubuntu and by me, so you really ought to upgrade to trusty
15:26 ktosiek they don't seem to have saucy support, but I can rebuild for saucy myself. Thank you for confirming!
15:26 Guest45380 joined #gluster
15:26 semiosis launchpad will not build for unsupported releases
15:27 ktosiek I know :-<
15:27 ktosiek I have to move all those machines to trusty anyway...
15:31 overclk joined #gluster
15:31 atinmu joined #gluster
15:40 topshare joined #gluster
15:41 julim joined #gluster
15:45 coredump joined #gluster
15:46 jflf Answer to myself, for posterity: yes the Samba gfapi backend requires the glusterfs-server package installed on the client.
15:50 gnudna joined #gluster
15:55 kanagaraj joined #gluster
15:56 lifeofguenter joined #gluster
15:57 Hanefr joined #gluster
15:58 Hanefr Would anybody be willing to answer a question for me about global namespacing?  I’m not sure I understand this and haven’t found any specific information on it.
16:00 deepakcs joined #gluster
16:01 Hanefr How does namespace work?  My understanding is I connect to gluster.domain.com and that fqdn is shared among gserver1, gserver2, gserver3, gserver4.  Is this not correct?
16:11 Manikandan joined #gluster
16:11 Hanefr Bueller?
16:13 kshlm joined #gluster
16:18 doo joined #gluster
16:21 anil joined #gluster
16:23 JoeJulian Hanefr: patience, grasshopper. ;) The file names, across the entire distributed cluster, are combined into one. That's the global namespace that's referred to.
16:24 Hanefr Thanks Joe. Connecting my glusterfs client to gserver1:/glustershare connects it to all the servers in that cluster?  No special DNS setup?
16:25 JoeJulian When you create a volume, you assign bricks based on a series of hostnames. The clients will need to be able to find those hostnames.
16:26 Hanefr Right.  peer probe gserver2, gserver3, gserver4 then add them into a volume with gluster volume create replicate 2 gserver1:/share gserver2:/share
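A note on syntax: the CLI expects a volume name and the keyword replica, so the full sequence would look roughly like this (a sketch reusing Hanefr's hostnames and a hypothetical volume name):

    gluster peer probe gserver2
    gluster volume create gvol replica 2 gserver1:/share gserver2:/share
    gluster volume start gvol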
16:27 Hanefr And if, say gserver2, loses connectivity even though I used gserver1 for the initial connection it’ll still stay connected?
16:27 JoeJulian right
16:27 JoeJulian @hostnames
16:27 glusterbot JoeJulian: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
16:27 JoeJulian no...
16:27 JoeJulian @mount server
16:27 glusterbot JoeJulian: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns, or (#2) One caveat is that the clients never learn of any other management peers. If the client cannot communicate with the mount server, that client will not learn of any volume changes.
16:27 JoeJulian yeah, that's the factoid I was looking for
16:29 Hanefr Thanks.  Is there a way to set it up so that there is failover?
16:30 JoeJulian mount failover?
16:30 JoeJulian @rrdns
16:30 glusterbot JoeJulian: You can use rrdns to allow failover for mounting your volume. See Joe's tutorial: http://goo.gl/ktI6p
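The rrdns approach is just one DNS name carrying an A record per server, so the volfile-server lookup rotates across them; newer fuse clients can also list fallbacks explicitly at mount time. A sketch with hypothetical names and addresses:

    ; zone file: gluster.domain.com resolves round-robin
    gluster  IN  A  10.0.0.11
    gluster  IN  A  10.0.0.12

    # fuse mount option (spelled backupvolfile-server in older releases)
    mount -t glusterfs -o backup-volfile-servers=gserver2:gserver3 gserver1:/gv0 /mnt/gv0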
16:31 atinmu joined #gluster
16:31 Hanefr That’s what I’m looking for!  Thank you sir!
16:31 JoeJulian You're welcome
16:35 hchiramm_ joined #gluster
16:36 gem joined #gluster
16:37 nangthang joined #gluster
16:39 smohan joined #gluster
16:40 pjschmitt joined #gluster
16:40 kanagaraj joined #gluster
16:40 pjschmitt so how much cpu is usually used when it's done with infiniband?
16:40 pjschmitt is it enough that I should separate my compute and storage nodes?
16:42 T3 joined #gluster
16:55 kumar joined #gluster
16:58 dusmant joined #gluster
16:59 JoeJulian I suspect that's a hardware dependent question. I know that using rdma you have fewer context switches so it should be less load than without it.
17:00 JoeJulian If you do share compute and storage, I would use cgroups to guarantee memory.
17:05 p0licy JoeJulian: thanks for the help yesterday. little bit of a noob when it comes to glusterfs
17:06 virusuy joined #gluster
17:07 squizzi joined #gluster
17:08 coredump joined #gluster
17:14 SOLDIERz______ joined #gluster
17:14 JoeJulian You're welcome. I'm happy to help.
17:15 ekuric joined #gluster
17:15 p0licy JoeJulian: is there a way to configure what user geo replication uses?
17:16 p0licy do I need to configure /var/lib/glusterd/geo-replication/gsyncd_template.conf
17:17 squizzi joined #gluster
17:21 plarsen joined #gluster
17:27 JoeJulian I'm not sure and can't research it at the moment. Big customer coming online this week and things are a bit hectic at $dayjob right now.
17:34 Rapture joined #gluster
17:38 xavih joined #gluster
17:38 malevolent joined #gluster
17:39 troyready joined #gluster
17:44 Gill joined #gluster
17:46 luis_silva joined #gluster
17:54 lifeofguenter joined #gluster
18:02 jmarley joined #gluster
18:03 soumya joined #gluster
18:04 roost joined #gluster
18:06 nangthang joined #gluster
18:08 cornus_ammonis joined #gluster
18:19 gildub joined #gluster
18:24 B21956 joined #gluster
18:27 Philambdo1 joined #gluster
18:30 lalatenduM joined #gluster
18:34 lifeofguenter joined #gluster
18:39 hagarth joined #gluster
18:39 PeterA joined #gluster
18:39 PeterA i have a brick crashing issue when trying to delete some files from a non-replicated volume
18:39 PeterA over nfs
18:40 PeterA how is it possible for an nfs client trying to rm a folder to crash the brick?!
18:40 JoeJulian fpaste the crash log
18:41 PeterA u mean the brick log?
18:41 JoeJulian yes, not the whole thing though. Just from a half dozen lines or so before the crash report.
18:43 SOLDIERz______ joined #gluster
18:44 PeterA http://fpaste.org/199708/14267042/
18:47 Philambdo1 joined #gluster
18:48 PeterA any clue
18:48 PeterA it's super repeatable
18:48 cornfed78 joined #gluster
18:50 julim joined #gluster
18:50 PaulCuzner joined #gluster
18:52 ekuric left #gluster
18:53 plarsen joined #gluster
18:54 JoeJulian looks like it has to do with quota and might be related to bug 1122120
18:54 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1122120 urgent, unspecified, ---, bugs, MODIFIED , Bricks crashing after disable and re-enabled quota on a volume
18:54 Philambdo1 joined #gluster
18:55 PeterA err…i created that bug but i didn't re-enable or disable any quota...
18:55 PeterA recently...
18:57 PeterA but the brick only crash when we try to remove a specific folder....
18:58 theron joined #gluster
18:59 corretico joined #gluster
19:00 punit_ joined #gluster
19:00 JoeJulian well the crash seems to be coming from a similar code path and it claims to be fixed in 3.5.3.
19:02 PeterA oh....
19:02 tanuck joined #gluster
19:03 PeterA when i look into the bricks it seems like files that exist on the brick do not show on the client
19:03 PeterA and i get an IO error when i try to remove
19:03 PeterA http://fpaste.org/199727/70542214/
19:04 PeterA whenever try to rm that folder, the brick crashed
19:04 PeterA same brick crashed
19:05 p0licy does anyone know how to fix "Gluster version mismatch between master and slave" when setting up geo replication
19:05 JoeJulian PeterA: I guess, other than that, I would just have to recommend you file a bug report.
19:05 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
19:06 PeterA ok .....
19:08 JoeJulian p0licy: which versions are you trying to mix?
19:09 diegows joined #gluster
19:11 p0licy both servers are version 3.6.2
19:11 p0licy running gluster volume geo-replication <master volume name> <slave hostname>::<slave volume> create
19:12 p0licy and I get "is not a valid slave volume".
19:13 PeterA i was able to remove that folder over a glusterfs mount…
19:15 JoeJulian odd
19:16 JoeJulian p0licy: That's the wrong syntax for the remote volume according to https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_geo-replication.md
19:17 JoeJulian though I do see a :: form in the notes. I wonder if that's a typo.
19:18 p0licy i am supposed to run this command on the master correct?
19:18 p0licy so maybe I'm doing it wrong. on the slave I have a volume (gv0) that is being replicated by two nodes. then I want to set up a master volume (gv0) to replicate to the slave volume
19:18 JoeJulian correct
19:21 p0licy both volume types are replicate
19:26 p0licy am I supposed to run gluster create
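For the record, the 3.6-era sequence is roughly the following, sketched with hypothetical names: push-pem distributes the ssh keys to the slave, and force (which p0licy ends up needing later in this log) relaxes pre-checks like the slave-volume validation that is failing here.

    gluster volume geo-replication gv0 slavehost::gv0 create push-pem force
    gluster volume geo-replication gv0 slavehost::gv0 start
    gluster volume geo-replication gv0 slavehost::gv0 status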
19:40 PeterA when i tried to remove another folder it crashed a brick too
19:40 PeterA this time over glusterfs
19:40 PeterA http://fpaste.org/199750/14267076/
19:42 lpabon joined #gluster
19:43 p0licy is anyone else running 3.6.2? or should I downgrade to 3.5
19:44 gnudna i was running 3.6.2 on centos 7
19:44 gnudna using packages from epel
19:44 p0licy are you getting this error over and over:  readv on /var/run/b9145ed40b6744f94d98db7edbd9512b.socket failed (Invalid argument)
19:45 diegows joined #gluster
19:46 gnudna which log file are you looking at?
19:47 p0licy etc-glusterfs-glusterd.vol.log
19:47 p0licy I also can't get geo-replication working. been at it for 3 hours now
19:49 gnudna not seeing those exact errors
19:49 gnudna seeing others but at this point that might be me using selinux
19:50 diegows_ joined #gluster
19:50 gnudna so to rephrase, yes i am seeing similar warnings
19:54 glusterbot News from newglusterbugs: [Bug 1203433] Brick/glusterfsd crash when tried to rm a folder with IO error <https://bugzilla.redhat.com/show_bug.cgi?id=1203433>
19:55 p0licy has anyone had this error when setting up geo replication
19:55 p0licy Gluster version mismatch between master and slave.
20:00 ecc joined #gluster
20:00 ecc hello all
20:01 ecc I'm new to gluster and need some help on the client side
20:02 ecc I've set up a test cluster with two nodes with replication
20:02 SOLDIERz______ joined #gluster
20:02 ecc running sudo gluster volume status looks good on them
20:03 ecc installed the glusterfs-client on another machine, and gave it full access to all ports/TCP
20:04 ecc added test00.domain.com:/gv0 /gluster/test/ glusterfs defaults,_netdev,log-level=WARNING,log-file=/var/log/gluster.log 0 0
20:04 ecc to fstab
20:04 ecc but mounting seems to hang
20:05 ecc Is there any way I can test connectivity from my client machine?
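Two quick checks for ecc's question, sketched against the hostname from the fstab line: confirm the management port answers, then mount by hand with a verbose log instead of waiting on fstab. The brick ports must be reachable too; "gluster volume status" on a server lists the port each brick listens on.

    nc -zv test00.domain.com 24007    # glusterd, which hands out the volfile
    mount -t glusterfs -o log-level=DEBUG,log-file=/tmp/gv0.log test00.domain.com:/gv0 /gluster/test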
20:08 deniszh joined #gluster
20:09 deniszh left #gluster
20:12 JoeJulian ecc: fpaste.org your log file
20:13 ecc I'd love to but /var/log/glusterfs is empty :(
20:14 JoeJulian right, because you told it to go to /var/log/gluster.log
20:15 ecc oh, oops
20:15 ecc :P
20:15 JoeJulian hehe
20:16 ecc Ok, I think I see the problem
20:16 ecc failed to connect to remote host errors ;)
20:17 gnudna firewall
20:17 ecc Fixed the hostname to an internal one and it looks to be mounted :)
20:18 JoeJulian cool
20:18 ecc hurr durr my bad
20:18 ecc one more question
20:18 JoeJulian ecc:  now remember, the client has to be able to connect to all the bricks. Make sure all their hostnames can be resolved.
20:19 ecc they should be able to, I think
20:19 ecc *looks at log*
20:19 ecc yep
20:20 ecc here's a question
20:20 ecc currently it connects to test00
20:21 ecc but is there any way to transparently load balance it so that if test00 goes down hard I can still connect to say, test01 dynamically?
20:22 ecc (they are two nodes set up with a replication)
20:23 gnudna ecc when using the main hostname it seems to fail over gracefully when using the glusterfs fuse module anyway
20:23 ecc cool
20:23 gnudna as far as i can tell it only works that way when using glusterfs fuse module
20:24 gnudna im sure someone here can chime in with a better answer
20:24 ecc Thanks gnudna
20:24 gnudna seriously wait for a better answer i just started working with glusterfs last week
20:24 ecc haha I will
20:24 gnudna ;)
20:25 ecc and Thanks JoeJulian
20:27 ecc Also, in case anyone wants to tell me/yell at me that I'm doing stuff wrong - my plan is to run fatcache (https://github.com/twitter/fatcache) stuff on a glusterFS running on EBS SSDs
20:27 DV joined #gluster
20:29 tanuck joined #gluster
20:31 _polto_ trying to remove a folder I get - Transport endpoint is not connected
20:32 _polto_ gluster is available and everything else seems to work; in some folders I can write thousands of files, and some are non-writable and unremovable.
20:33 gnudna _polto_ is there some kinda heal taking place?
20:34 _polto_ gnudna: I tried healer.py on those files, it says "no accusations"
20:34 _polto_ No heal needed for  ...
20:34 gnudna gluster volume heal your-volume-name info
20:35 gnudna i am new to gluster but i thought when it did a heal it set files to read only
20:35 _polto_ gnudna: Number of entries: 0 for each disk
20:36 _polto_ and just recently I got files and folders in a strange state. ls -als indicates:
20:36 _polto_ ls: cannot access img/14235759: No data available
20:36 _polto_ ? ?????????? ? ?     ?       ?            ? 14235759
20:37 _polto_ no filetype, permission, date, ...
20:38 gnudna _polto_ no idea why that would happen. does this happen on the mount only?
20:38 gnudna what happens when you cd into the brick itself
20:38 gnudna can you see the file in question?
20:41 _polto_ gnudna: yes they are.
20:42 _polto_ just one of the folders has d--------- permissions, but the others are OK
20:42 _polto_ it's the third time I have such problems on gluster.
20:42 glusterbot _polto_: d-------'s karma is now -2
20:42 _polto_ I can remove those files and resplit them from RAW data.
20:43 _polto_ but it's the 3rd time now I've hit such a case.
20:43 gnudna not sure what to do in your case
20:43 gnudna have not used gluster long enough
20:44 _polto_ maybe somebody else here would be able to help ?
20:44 _polto_ thanks gnudna
20:44 primusinterpares joined #gluster
20:44 gnudna very likely someone here has an answer
20:44 gnudna just hope they are around ;)
20:44 JoeJulian _polto_: Does "gluster volume status" look right to you? If not, share it via fpaste.org.
20:46 _polto_ JoeJulian: http://pastie.org/10036016
20:46 _polto_ yep, look OK
20:47 JoeJulian look for errors in your client log
20:47 JoeJulian what version is this?
20:49 _polto_ [2015-03-18 16:29:41.287822] E [client-rpc-fops.c:2680:client3_3_opendir_cbk] 0-data-client-35: remote operation failed: Permission denied. Path: /newstructure/rawdata/00-0E-64-08-1C-D2/master/1423492625/segment/1423575302/preview/debayer/img/14235775 (e587da53-8338-45a5-9ce8-0928688d4c87)
20:50 _polto_ glusterfs 3.6.2
20:52 _polto_ if I remove the files on bricks I can remove them on the gluster volume
20:52 _polto_ but I do not know if it's a good idea..
20:52 _polto_ JoeJulian: ^
20:52 gnudna not sure that is the way you want to go about it
20:53 JoeJulian _polto_: selinux on your servers?
20:53 _polto_ JoeJulian: nope
20:53 JoeJulian Then how can root be getting an EPERM on the bricks?
20:54 _polto_ standard ubuntu 14.04 with http://ppa.launchpad.net/gluster/glusterfs-3.6/ubuntu PPA
20:54 gnudna left #gluster
20:54 _polto_ EPERM ?
20:54 JoeJulian permission denied
20:55 _polto_ but how did it happen ?
20:55 _polto_ I write everything as the same user from two different servers.
20:55 JoeJulian glusterfsd runs as root
20:55 _polto_ certainly
20:56 JoeJulian So it shouldn't be denied access to anything.
20:56 JoeJulian Any other possibilities that ubuntu has for restricting access?
20:57 JoeJulian anything special about your bricks?
20:58 misc apparmor ?
20:58 JoeJulian That's the tool I couldn't think of.
20:58 _polto_ JoeJulian: during my tests I cd'd to a dir and ran "for i in `seq 1 100`; do dd if=/dev/zero of=$i bs=1M count=3 ; done"; only 64 files were created, the rest got permission denied. And if I retry, the same files cannot be written again.
21:00 _polto_ I did not disable or change any default apparmor settings. but no documentation says that I should ...
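A quick way to rule apparmor in or out on Ubuntu, sketched; denials are logged by the kernel:

    sudo aa-status                    # which profiles are in enforce mode
    sudo dmesg | grep -i 'apparmor.*denied'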
21:01 JoeJulian that would imply that only a subset of your bricks are blocking glusterfsd's access. Statistically, that could be 2 or 4, depending on whether it's just one side or both sides of the replica.
21:03 JoeJulian 0-data-client-35 says that at least one of the permission errors originates from your last brick, 192.168.98.2:/export/sdr/brick
21:09 _polto_ JoeJulian:  ok, in my case I can chmod 777 -R /data
21:09 JoeJulian that doesn't make sense
21:09 _polto_ so why this permission problem ?
21:09 JoeJulian think root. root has permissions unless something else blocks it.
21:10 JoeJulian chmod isn't going to stop or enable root
21:10 _polto_ apparmor ?
21:10 JoeJulian ?
21:10 JoeJulian I don't use ubuntu
21:11 _polto_ I should not either
21:11 JoeJulian hehe
21:11 JoeJulian I always recommend using whatever you're most familiar with.
21:12 _polto_ Gentoo :) but I was not able to convince my collegues..
21:16 _polto_ JoeJulian: the strange thing: some other folders can be written to, and I can write 200'000 files in those other folders on the same volume.
21:17 JoeJulian Odd
21:18 JoeJulian @targeted rebalance
21:19 JoeJulian @factoids search rebalance
21:19 glusterbot JoeJulian: No keys matched that query.
21:19 JoeJulian <sigh>
21:23 st__ joined #gluster
21:29 roost joined #gluster
21:30 p0licy JoeJulian: figured out my geo replication issues, needed to do a force when creating it. Need to sleep
21:31 JoeJulian awesome. sleep well
21:32 p0licy I am still getting a lot of error messages in the logs
21:35 p0licy [glusterd-utils.c:7364:glusterd_add_inode_size_to_dict] 0-management: xfs_info exited with non-zero exit status
21:35 p0licy over and over again
21:44 jobewan joined #gluster
21:52 jmarley joined #gluster
21:54 wushudoin| joined #gluster
22:09 Gill joined #gluster
22:11 Gill left #gluster
22:12 tzimisce joined #gluster
22:14 sputnik13 joined #gluster
22:17 davidbitton joined #gluster
22:18 badone joined #gluster
22:33 diegows_ joined #gluster
22:36 ckotil_ joined #gluster
22:41 daMaestro joined #gluster
22:41 _polto_ JoeJulian: wow... I have a script that creates jpeg files... suddenly it stopped with permission denied. touch'ing a file does not work as user; as root it does. if I chmod 777, it works again as user. but if I remove that file, I cannot recreate it as user. the directory has write permissions.
23:08 pjschmitt JoeJulian: that's a good point
23:08 pjschmitt JoeJulian: maybe you can answer this: I saw this post, and it somewhat bothered me, did rdma support get pulled?
23:08 pjschmitt http://www.gluster.org/pipermail/gluster-users/2014-October/019016.html
23:10 pjschmitt JoeJulian: are there issues with SELinux?
23:21 sputnik13 joined #gluster
23:24 doo joined #gluster
23:25 fsimonce joined #gluster
23:26 sputnik13 joined #gluster
23:37 davidbitton joined #gluster
23:37 sputnik13 joined #gluster
23:43 theron joined #gluster
23:58 _zerick_ joined #gluster
