
IRC log for #gluster, 2013-10-03


All times shown according to UTC.

Time Nick Message
00:00 JoeJulian Always works for me. :D
00:01 JoeJulian Real life use, had a couple hard drives fail in the last 4 years and replication has always worked perfectly at restoring them.
00:03 zeedon2 E [client-handshake.c:1741:client_query_portmap_cbk] 0-gv0-client-1: failed to get the port number for remote subvolume. Please
00:03 zeedon2 run 'gluster volume status' on server to see if brick process is running
00:04 JoeJulian zeedon2: 3.4.0?
00:05 zeedon2 3.4.1-ubuntu1~precise1
00:05 JoeJulian hmm, not what I was thinking then...
00:05 JoeJulian fpaste your log if you'd like me to take a look.
00:05 JoeJulian @paste
00:05 glusterbot JoeJulian: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f]paste[binit] and it'll give you a URL.
00:06 JoeJulian pastebinit for you ubuntu types...
00:07 JoeJulian ... and then I'm going to go start dinner and probably have a drink.
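(For reference, a minimal sketch of the fpaste/pastebinit workflow glusterbot describes; the log path below is only an illustrative example, not one from this conversation:)
    yum install fpaste             # RPM-based distros
    apt-get install pastebinit     # Debian/Ubuntu
    tail -n 200 /var/log/glusterfs/mnt-gv0.log | pastebinit
    gluster volume status | fpaste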
00:10 StarBeast joined #gluster
00:13 JoeJulian zeedon2: When was the most recent disconnection?
00:14 zeedon2 one moment
00:15 Guest34797 joined #gluster
00:20 zeedon2 [02-Oct-2013 07:06:48 UTC]
00:20 zeedon2 apologies for delay
00:26 JoeJulian No worries... It'll be a bit of a delay for me as well as I'm getting some stuff in the oven.
00:42 dmojoryder left #gluster
00:53 JoeJulian @later tell zeedon2 Well those logs were fairly useless as none of them were for the timespan surrounding 07:06 UTC. Best thing to do is check the logs at the time that you're having the issue and see what the last thing that happened was.
00:53 glusterbot JoeJulian: The operation succeeded.
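(One quick way to do what JoeJulian suggests is to grep the client and brick logs for the window around the disconnect; the paths and timestamp here are only an illustration based on the report above:)
    grep '2013-10-02 07:0' /var/log/glusterfs/mnt-gv0.log
    grep '2013-10-02 07:0' /var/log/glusterfs/bricks/*.log    # on the servers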
01:04 rferreira joined #gluster
01:22 Cenbe joined #gluster
01:36 harish joined #gluster
01:53 toad joined #gluster
02:05 harish joined #gluster
02:21 satheesh1 joined #gluster
02:53 kshlm joined #gluster
02:55 rjoseph joined #gluster
03:10 DV joined #gluster
03:15 shubhendu joined #gluster
03:27 sgowda joined #gluster
03:30 davinder joined #gluster
03:44 itisravi joined #gluster
03:48 shylesh joined #gluster
03:55 kanagaraj joined #gluster
04:02 mohankumar joined #gluster
04:10 dusmant joined #gluster
04:11 ppai joined #gluster
04:26 edong23 joined #gluster
04:32 kPb_in joined #gluster
04:40 shruti joined #gluster
04:44 shubhendu joined #gluster
04:44 GabrieleV joined #gluster
04:59 bala joined #gluster
05:14 aravindavk joined #gluster
05:16 shyam joined #gluster
05:19 bulde joined #gluster
05:20 bala joined #gluster
05:21 shylesh joined #gluster
05:28 raghu joined #gluster
05:30 aik__ joined #gluster
05:33 aik__ hi! cannot get volume started
05:33 aik__ volume start: gv0: failed: Commit failed on localhost. Please check the log file for more details.
05:33 aik__ followed directions from http://www.gluster.org/community/documentation/index.php/QuickStart
05:33 glusterbot <http://goo.gl/OEzZn> (at www.gluster.org)
05:34 roo9 aik__: just a random thought, but perhaps you could check the log file. it might have more details.
05:35 aik__ roo9: checked. cannot understand anything. I am trying to create a share, create a disk image on it and run qemu with it to test qemu's ability to work with gluster
05:35 aik__ roo9: [2013-10-03 05:23:17.169636] W [rpc-transport.c:175:rpc_transport_load] 0-rpc-transport: missing 'option transport-type'. defaulting to "socket"
05:35 aik__ [2013-10-03 05:23:17.173387] I [socket.c:3480:socket_init] 0-glusterfs: SSL support is NOT enabled
05:35 aik__ [2013-10-03 05:23:17.173420] I [socket.c:3495:socket_init] 0-glusterfs: using system polling thread
05:35 aik__ [2013-10-03 05:23:17.179887] I [cli-cmd-volume.c:1275:cli_check_gsync_present] 0-: geo-replication not installed
05:35 aik__ [2013-10-03 05:23:17.188707] I [cli-rpc-ops.c:1093:gf_cli_start_volume_cbk] 0-cli: Received resp to start volume
05:35 aik__ [2013-10-03 05:23:17.188767] I [input.c:36:cli_batch] 0-: Exiting with: -1
05:36 aik__ roo9: that's the log
05:36 micu1 joined #gluster
05:36 aik__ roo9: quickstart does not say anything about ssl so I assume it is not absolutely necessary, right?
05:41 aik__ roo9: it is fedora19 on powerpc64 if it matters
05:41 aik__ glusterfs 3.4.0 built on Aug  6 2013 04:14:21
05:45 anands joined #gluster
06:00 nshaikh joined #gluster
06:02 mohankumar joined #gluster
06:10 vshankar joined #gluster
06:11 lalatenduM joined #gluster
06:11 vshankar_ joined #gluster
06:11 kshlm aik__: check the glusterd logs.
06:12 vimal joined #gluster
06:12 aik__ kshlm: where?
06:13 kwevers joined #gluster
06:13 kshlm the same directory where you got the cli logs, it should be /var/log/glusterfs/etc-glusterfs-glusterd.log
06:13 aik__ kshlm: then the daemon has not started
06:14 aik__ kshlm: service glusterd status shows only glusterd
06:14 kshlm that's what it should show.
06:15 aik__ kshlm: quickstart says there must be 3 services
06:15 aik__ glusterd glusterfsd glusterfs
06:15 kPb_in joined #gluster
06:16 kshlm the others start up after you've created and started a volume
06:16 aik__ kshlm: is it a must to have 2 nodes? I am trying just one
06:16 kshlm nope. gluster runs fine on a single node
06:16 kshlm please check the glusterd log again.
06:17 aik__ [root@dyn232 aik]# ls -la /var/log/glusterfs/
06:17 aik__ total 20
06:17 aik__ drwxrwxrwx.  2 root root 4096 Oct  3 15:23 .
06:17 aik__ drwxr-xr-x. 19 root root 4096 Oct  3 03:21 ..
06:17 aik__ -rw-------.  1 root root 5493 Oct  3 16:02 cli.log
06:17 aik__ -rw-------.  1 root root 1896 Oct  3 16:01 .cmd_log_history
06:17 kshlm that should have some clue to why you are not able to start the volume.
06:17 aik__ kshlm: no log
06:17 kshlm that is strange
06:19 kshlm try restarting glusterd, see if that creates the log file.
06:20 aik__ kshlm: did it, twice :)
06:33 kshlm glusterd should create the log file automatically. What does 'ls -l /proc/<glusterds-pid>/fd' give?
06:36 psharma joined #gluster
06:40 rastar joined #gluster
06:47 ricky-ticky joined #gluster
06:47 tjikkun_work joined #gluster
06:50 aik__ kshlm: glusterds is not running
06:51 aik__ [root@dyn232 ~]# ps ax | grep gluster
06:51 aik__ 6333 ?        Ssl    0:00 /usr/sbin/glusterd -p /run/glusterd.pid
06:51 aik__ 6372 pts/0    S+     0:00 grep --color=auto gluster
06:51 aik__ [root@dyn232 ~]# ls -l /proc/6333/fd
06:51 aik__ total 0
06:51 aik__ lr-x------. 1 root root 64 Oct  3 16:42 0 -> /dev/null
06:51 aik__ l-wx------. 1 root root 64 Oct  3 16:42 1 -> /dev/null
06:51 aik__ l-wx------. 1 root root 64 Oct  3 16:42 2 -> /dev/null
06:51 aik__ lrwx------. 1 root root 64 Oct  3 16:42 3 -> anon_inode:[eventpoll]
06:51 aik__ l-wx------. 1 root root 64 Oct  3 16:42 4 -> /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
06:51 aik__ lrwx------. 1 root root 64 Oct  3 16:42 5 -> /run/glusterd.pid
06:51 aik__ l-wx------. 1 root root 64 Oct  3 16:42 8 -> /var/log/glusterfs/.cmd_log_history
06:51 aik__ lrwx------. 1 root root 64 Oct  3 16:42 9 -> socket:[315776]
06:51 ekuric joined #gluster
06:53 kshlm so glusterd has opened the log file.
06:54 ngoswami joined #gluster
06:55 aik__ kshlm: yep, I enabled it and restarted the service
06:55 aik__ kshlm: and did many other things :)
06:56 aik__ kshlm: now I see this: [root@dyn232 ~]# gluster volume delete gv0
06:56 aik__ Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
06:56 aik__ volume delete: gv0: failed: Volume gv0 does not exist
06:56 aik__ [root@dyn232 ~]#
06:56 aik__ [root@dyn232 ~]# gluster volume create gv0 vpl2:/home/aik/glusterroot/loop0
06:56 aik__ volume create: gv0: failed: /home/aik/glusterroot/loop0 or a prefix of it is already part of a volume
06:56 glusterbot aik__: To clear that error, follow the instructions at http://goo.gl/YUzrh or see this bug http://goo.gl/YZi8Y
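(The fix behind those links amounts to clearing the leftover GlusterFS metadata from the brick directory, and for the "prefix" case from its parent directories too, before re-creating the volume. A rough sketch, using the brick path from the error above:)
    setfattr -x trusted.glusterfs.volume-id /home/aik/glusterroot/loop0
    setfattr -x trusted.gfid /home/aik/glusterroot/loop0
    rm -rf /home/aik/glusterroot/loop0/.glusterfs
    service glusterd restart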
06:57 Maxence joined #gluster
07:03 shubhendu joined #gluster
07:13 rgustafs joined #gluster
07:15 keytab joined #gluster
07:18 xavih joined #gluster
07:28 masterzen joined #gluster
07:29 ppai joined #gluster
07:30 ctria joined #gluster
07:46 aik__ kshlm: selinux was the problem, had to disable it to get things working
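(For anyone hitting the same thing, checking and temporarily relaxing SELinux looks roughly like this; a proper policy or boolean is preferable to disabling it outright:)
    getenforce                    # Enforcing / Permissive / Disabled
    setenforce 0                  # permissive until next boot
    ausearch -m avc -ts recent    # show recent denials
    # for a persistent change, set SELINUX=permissive in /etc/selinux/config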
08:17 kopke joined #gluster
08:21 ppai joined #gluster
08:24 StarBeast joined #gluster
08:26 ninkotech joined #gluster
08:28 ninkotech_ joined #gluster
08:30 CheRi joined #gluster
08:35 manik joined #gluster
08:49 rgustafs joined #gluster
08:58 ngoswami joined #gluster
09:07 glusterbot New news from newglusterbugs: [Bug 1010834] No error is reported when files are in extended-attribute split-brain state. <http://goo.gl/HlfVxX>
09:08 saurabh joined #gluster
09:18 psharma joined #gluster
09:20 aravindavk joined #gluster
09:36 nasso joined #gluster
09:38 rotbeard joined #gluster
09:39 davinder joined #gluster
09:49 the-me joined #gluster
09:50 lalatenduM joined #gluster
09:50 jcsp joined #gluster
10:09 Oneiroi joined #gluster
10:09 ngoswami joined #gluster
10:17 ricky-ticky joined #gluster
10:23 davinder joined #gluster
10:24 torbjorn___ joined #gluster
10:27 Maxence_ joined #gluster
10:28 pkoro joined #gluster
10:37 rgustafs joined #gluster
10:43 harish joined #gluster
10:45 kkeithley1 joined #gluster
10:45 kkeithley1 left #gluster
10:46 kkeithley1 joined #gluster
10:50 mohankumar joined #gluster
10:50 torbjorn___ joined #gluster
10:52 hybrid512 joined #gluster
10:54 satheesh1 joined #gluster
11:05 RicardoSSP joined #gluster
11:05 RicardoSSP joined #gluster
11:18 satheesh1 joined #gluster
11:21 CheRi joined #gluster
11:26 aik__ joined #gluster
11:29 ppai joined #gluster
11:32 sac`away joined #gluster
11:34 manik joined #gluster
11:38 nasso joined #gluster
11:40 GabrieleV joined #gluster
11:40 SteveCooling joined #gluster
11:45 blablablashow joined #gluster
11:45 polfilm joined #gluster
11:46 blablablashow hi,guys!
11:47 saurabh joined #gluster
11:47 B21956 joined #gluster
11:47 rgustafs joined #gluster
11:47 blablablashow Can anybody tell me, does Glusterfs 3.3.1  NFS mount work with CTDB?
11:49 blablablashow it's very important for me. i can't find fresh news about this..
11:50 blablablashow anybody here??
12:00 blablablashow can anybody answer the question?
12:01 aik__ left #gluster
12:01 shyam left #gluster
12:01 mohankumar joined #gluster
12:02 blablablashow hi
12:02 glusterbot blablablashow: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
12:02 satheesh2 joined #gluster
12:04 blablablashow Can anybody tell me, does Glusterfs 3.3.1  NFS mount work with CTDB?
12:06 P0w3r3d joined #gluster
12:07 itisravi joined #gluster
12:09 psharma joined #gluster
12:10 kkeithley_ Don't really know. If I had to guess, I'd say it doesn't do anything with CTDB.
12:11 kkeithley_ You might ask again a little later in the day when more people are on-line.
12:16 KORG joined #gluster
12:17 KORG Guys, please advise on documentation for replacing a node in a replica?
12:20 kkeithley_ well, I take that back, there's this article about CTDB in the docs on the Gluster web site http://www.gluster.org/community/documentation/index.php/CTDB
12:20 glusterbot <http://goo.gl/Yt3pOb> (at www.gluster.org)
12:22 KORG I found such documentation: http://joejulian.name/blog/replacing-a-glusterfs-server-best-practice/
12:22 glusterbot <http://goo.gl/pwTHN> (at joejulian.name)
12:22 kkeithley_ JoeJulian gives good advice
12:27 blablablashow About CTDB. I found two docs from Red Hat. This one says that NFS doesn't work: https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage_Software_Appliance/3.2/html-single/User_Guide/#sect-Administration_Guide-GlusterFS_Client-NFS , but other articles don't mention this.
12:28 glusterbot <http://goo.gl/ZQBZAS> (at access.redhat.com)
12:29 ngoswami joined #gluster
12:32 bulde joined #gluster
12:32 edward1 joined #gluster
12:37 dusmant joined #gluster
12:37 KORG Maybe you can suggest official documentation about replacing a node?
12:49 satheesh joined #gluster
13:03 ninkotech_ joined #gluster
13:03 ninkotech joined #gluster
13:04 bennyturns joined #gluster
13:04 chirino joined #gluster
13:07 anands joined #gluster
13:09 ndk joined #gluster
13:11 bala joined #gluster
13:14 anands joined #gluster
13:32 sprachgenerator joined #gluster
13:42 keytab Where do i download gluster Server manager?
13:42 squizzi left #gluster
13:45 jcsp joined #gluster
13:47 keytab *Gluster Storage Platform
13:50 keytab left #gluster
13:59 kkeithley_ @yum
13:59 glusterbot kkeithley_: The official community glusterfs packages for RHEL (including CentOS, SL, etc) are available at http://goo.gl/42wTd5 The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates (or updates-testing) repository.
13:59 kkeithley_ @ppa
13:59 glusterbot kkeithley_: The official glusterfs packages for Ubuntu are available here: 3.3 stable: http://goo.gl/7ZTNY -- 3.4 stable: http://goo.gl/u33hy
13:59 ricky-ticky joined #gluster
13:59 kkeithley_ keytab: ^^^
14:02 kaptk2 joined #gluster
14:06 ndk` joined #gluster
14:14 jbautista joined #gluster
14:33 failshell joined #gluster
14:33 davinder2 joined #gluster
14:46 polfilm joined #gluster
14:52 wushudoin joined #gluster
14:59 ndk`` joined #gluster
15:00 aliguori joined #gluster
15:01 rjoseph joined #gluster
15:09 LoudNoises joined #gluster
15:17 ninkotech joined #gluster
15:24 dtyarnell joined #gluster
15:37 RicardoSSP joined #gluster
15:37 RicardoSSP joined #gluster
15:38 dtyarnell joined #gluster
15:48 B21956 joined #gluster
15:58 keytab joined #gluster
16:04 Jayunit100 anyone see intermittent gluster peer probe failures?
16:04 Jayunit100 like, do it once, fail.  do it again, fail.  do it third time, works.
16:05 zaitcev joined #gluster
16:06 raar joined #gluster
16:18 dtyarnell_ joined #gluster
16:23 kanagaraj joined #gluster
16:30 Mo__ joined #gluster
16:31 dalekurt joined #gluster
16:40 anands joined #gluster
16:43 TuxedoMan joined #gluster
16:43 TuxedoMan left #gluster
16:48 vpshastry joined #gluster
16:55 dalekurt joined #gluster
16:55 shylesh joined #gluster
16:57 kanagaraj joined #gluster
17:01 dalekurt_ joined #gluster
17:03 dalekurt joined #gluster
17:07 PatNarciso happy Thursday everyone.
17:08 dalekurt joined #gluster
17:09 vpshastry joined #gluster
17:11 PatNarciso glusterbot, hows your day going?
17:15 * PatNarciso hears a pin drop.
17:16 cfeller if a pin drops in #gluster, and everyone is away...
17:16 dalekurt joined #gluster
17:18 dalekurt joined #gluster
17:23 kaptk2 why doesn't glusterd 3.4 start on boot with an F19 machine?
17:35 failshell just got a Samba/NFS cluster on gluster with CTDB
17:35 failshell pretty slick
17:41 Technicool joined #gluster
17:42 * glusterbot is rocking
17:45 ninkotech_ joined #gluster
17:45 ninkotech joined #gluster
17:52 vpshastry joined #gluster
17:54 kkeithley_ kaptk2: what version, what Linux dist? Certainly in my experience if glusterd is enabled (systemctl enable glusterd with systemd, chkconfig --add glusterd with init.d) and the volume was started before the reboot, the volume will be autostarted by glusterd.
17:54 kaptk2 kkeithley: I am using Gluster 3.4 on Fedora 19
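(Concretely, a sketch of the boot-time setup kkeithley_ describes; the exact commands vary by distro:)
    systemctl enable glusterd     # systemd distros such as Fedora 19
    chkconfig --add glusterd      # SysV init distros
    gluster volume status         # volumes started before the reboot should come back on their own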
17:55 kkeithley_ failshell: you should blog about it. Someone was asking about that earlier today
17:57 johnmark PatNarciso: happy thursday :)
17:58 johnmark w00t!
17:58 johnmark http://glustermm-tigert.rhcloud.com/
17:58 glusterbot Title: Massive Storage. Delivered. Gluster (at glustermm-tigert.rhcloud.com)
17:58 johnmark latest staging
17:58 johnmark of the new site design
18:00 kaptk2 kkeithley_: here is a paste of my etc-glusterfs-glusterd.vol.log
18:00 kaptk2 http://fpaste.org/44132/80823157/
18:00 glusterbot Title: #44132 Fedora Project Pastebin (at fpaste.org)
18:00 kaptk2 Looks like it is trying to come up too early, like maybe before the network?
18:01 failshell kkeithley_: i have one last thing to iron out. i cant write to samba
18:01 failshell only read
18:01 ngoswami joined #gluster
18:02 kkeithley_ things like getaddrinfo failed (Name or service not known), DNS resolution failed on host slag.rocky.edu, error in getaddrinfo: Temporary failure in name resolution, etc., make me think you have DNS problems.
18:03 kkeithley_ Or it's trying to start before the network is up
18:05 kkeithley_ Although according to the systemd glusterd.service file it starts after network.target and rpcbind.service
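(If glusterd really is starting before the network is usable, one common systemd workaround is ordering it after network-online.target with a drop-in; the file name below is hypothetical and network.target alone does not guarantee interfaces are configured:)
    # /etc/systemd/system/glusterd.service.d/wait-online.conf
    [Unit]
    Wants=network-online.target
    After=network-online.target
    # then: systemctl daemon-reload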
18:06 wushudoin joined #gluster
18:10 vpshastry left #gluster
18:12 vpshastry joined #gluster
18:14 davinder joined #gluster
18:17 B21956 joined #gluster
18:26 dbruhn joined #gluster
18:29 dbruhn is there a reason the mnt*.log file doesn't get rotated with the rest of the files?
18:29 dbruhn 3.3.1
18:42 Guest34797 joined #gluster
18:46 kaptk2 kkeithley_: DNS is fine, because if I do a service glusterd start it fires right up.
18:47 kaptk2 kkeithley_: I wonder since it uses bridged networking if that is what is causing the issue.
18:48 semiosis dbruhn: glusterd's log rotation only applies to server log files.  client log files are written by the client independently.  i use logrotate with copytruncate to take care of those
18:48 dbruhn semiosis do you have an example of your logrotate config I can look at?
18:49 semiosis http://pastie.org/8375606
18:49 glusterbot Title: #8375606 - Pastie (at pastie.org)
18:49 semiosis pretty simple
18:49 dbruhn Appreciate it!
18:50 dbruhn So you are keeping 14 days worth of logs and rotating it daily?
18:50 vpshastry left #gluster
18:50 semiosis apparantly :)
18:50 dbruhn lol
18:53 semiosis grr, my auto spell checker was out to lunch
18:53 semiosis s/apparantly/apparently/
18:53 glusterbot What semiosis meant to say was: apparently :)
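(A logrotate rule along the lines semiosis describes, daily rotation keeping 14 copies with copytruncate; the file name and path glob are assumptions, not the contents of that paste:)
    # /etc/logrotate.d/glusterfs-client
    /var/log/glusterfs/*.log {
        daily
        rotate 14
        copytruncate
        compress
        delaycompress
        missingok
        notifempty
    }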
18:54 bennyturns joined #gluster
19:11 andreask joined #gluster
19:13 semiosis joined #gluster
19:14 semiosis :O
19:20 abradley joined #gluster
19:35 purpleidea joined #gluster
19:35 purpleidea joined #gluster
19:46 DV joined #gluster
19:56 ctria joined #gluster
20:05 PatNarciso You fellas got a sec?  I've got a prep doc related to how I'm going to move forward with my gluster setup. Would appreciate any feedback you guys got.  https://docs.google.com/document/d/1UBbHIMJuUXacr1RKfIUctukDbg8bozBixzI0KHQJ5cQ
20:05 glusterbot <http://goo.gl/U405ll> (at docs.google.com)
20:12 adamb joined #gluster
20:12 DV__ joined #gluster
20:51 johnmark PatNarciso: interesting - if you want to write that up in a quick blog post, we could blast it to a wider audience
20:51 DV joined #gluster
21:13 DV__ joined #gluster
21:16 [o__o] left #gluster
21:18 [o__o] joined #gluster
21:20 khushildep joined #gluster
21:25 PatNarciso johnmark, I'll do that.  I'm a bit backed up today -- cool if I get back to you on it within the next few days?
21:34 PatNarciso you know what sucks?  having a file system with redundancy out of the ass.  and then having a user who deletes a bunch of files without an undelete option.  I gotta work on that.
21:51 johnmark LOL
21:51 johnmark doh
21:59 phox yay =/ so with 3.4.1 I have files sporadically claiming I don't have permissions to read them, links claiming they don't go anywhere, and other weird shit like that =/
21:59 phox inclined to recreate the bricks from scratch =\
22:03 dbruhn left #gluster
22:08 phox unless there's anything I should do to possibly correct that
22:08 phox like reading every single file from everywhere
22:12 nasso joined #gluster
22:38 RedShift joined #gluster
22:38 RedShift hi all
22:39 RedShift imagine the following scenario, a gluster setup hosting virtual machines (continuous I/O). I have node1 and node2, node1 is active and node2 is backup. node1 & node2 need servicing which requires them to be taken offline.
22:40 RedShift I take node1 offline, the failover mechanism puts node2 active, thus all writes proceed on node2.
22:40 RedShift after node1 has been brought back online, all changes replicate to node1 from node2
22:41 RedShift but how can I tell if the replication has been completed and I can take node2 offline? Since a lot of virtual machines can cause a lot of IO, there might not be a point where the number of files to be healed is 0
22:42 RedShift if I take node2 offline too soon, it will have pending healing data to be transferred to node1 and node1 will already have new data because failover will have put node1 in charge because node2 was offline
22:42 RedShift thus - a split brain?
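(The usual way to answer that question is the self-heal status commands available in 3.3/3.4; a sketch, with gv0 as an example volume name:)
    gluster volume heal gv0 info               # entries still pending heal
    gluster volume heal gv0 info split-brain   # entries already in split-brain
    # wait for the pending list on the surviving node to drain before taking it offline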
22:49 tryggvil joined #gluster
23:02 * phox starts imagining but then realizes it's time to go home
23:06 aliguori joined #gluster
23:28 jbrooks joined #gluster
23:31 RicardoSSP joined #gluster
23:31 RicardoSSP joined #gluster
23:51 vpshastry joined #gluster
