
IRC log for #gluster, 2015-02-05


All times shown according to UTC.

Time Nick Message
00:08 jaank joined #gluster
00:10 Cenbe I want to create a kvm/qemu VM on gluster (two servers, one brick on each, replicated). Fedora 21, gluster 3.5.3.
00:10 Cenbe I can create a gluster storage pool from a third machine (also Fedora 21) with virt-manager.
00:10 Cenbe But when creating a volume in the pool, I get "Libvirt version does not support storage cloning".
00:10 Cenbe What am I doing wrong?
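One way people work around virt-manager's "storage cloning" limitation is to create the disk image directly on the volume and attach it to the guest as a network disk instead of going through a libvirt storage pool. A minimal sketch, assuming a hypothetical volume name "vmstore" on host "gluster1" (qemu-img has accepted gluster:// URIs since around qemu 1.3, and libvirt supports protocol='gluster' network disks):

    # create the image straight on the gluster volume
    qemu-img create -f qcow2 gluster://gluster1/vmstore/fedora21.qcow2 20G

    # reference it from the domain XML as a network disk
    <disk type='network' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source protocol='gluster' name='vmstore/fedora21.qcow2'>
        <host name='gluster1' port='24007'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>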
00:15 Pupeno joined #gluster
00:29 gildub joined #gluster
00:51 T3 joined #gluster
01:00 RicardoSSP joined #gluster
01:00 RicardoSSP joined #gluster
01:28 plarsen joined #gluster
01:38 T3 joined #gluster
01:49 rcampbel3 thanks for the Gluster 3.6.2 package for Ubuntu - just upgraded :)
01:49 _Bryan_ joined #gluster
01:55 B21956 joined #gluster
01:55 B21956 left #gluster
02:02 harish joined #gluster
02:04 rcampbel3 If I do gluster volume set $VOLNAME performance.cache-size 1GB on one node... it will do it on all nodes, right?
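Volume options set with `gluster volume set` go into the cluster-wide volume configuration, so issuing the command on any one peer applies it to the volume everywhere. A quick way to confirm, roughly:

    gluster volume set $VOLNAME performance.cache-size 1GB   # run on any one peer
    gluster volume info $VOLNAME                             # every peer should now list it under "Options Reconfigured"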
02:16 wkf joined #gluster
02:17 haomaiwa_ joined #gluster
02:20 gem_ joined #gluster
02:46 rjoseph joined #gluster
02:46 nangthang joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:49 T3 joined #gluster
02:50 yeep joined #gluster
03:09 victori joined #gluster
03:16 shylesh__ joined #gluster
03:54 jobewan joined #gluster
03:58 atinmu joined #gluster
03:58 suman_d_ joined #gluster
04:04 rafi joined #gluster
04:07 anoopcs joined #gluster
04:07 nbalacha joined #gluster
04:09 bala joined #gluster
04:11 shubhendu joined #gluster
04:15 h4rry_ joined #gluster
04:17 meghanam joined #gluster
04:32 julim joined #gluster
04:33 gem joined #gluster
04:37 spandit joined #gluster
04:41 sakshi joined #gluster
04:45 jiffin joined #gluster
04:46 jiffin joined #gluster
04:49 Intensity joined #gluster
04:50 victori joined #gluster
04:55 Manikandan joined #gluster
05:00 h4rry__ joined #gluster
05:04 elico joined #gluster
05:10 ndarshan joined #gluster
05:18 h4rry_ joined #gluster
05:19 nshaikh joined #gluster
05:21 rjoseph joined #gluster
05:21 dusmant joined #gluster
05:25 prasanth_ joined #gluster
05:27 meghanam joined #gluster
05:39 kdhananjay joined #gluster
05:39 soumya__ joined #gluster
05:42 vikumar joined #gluster
05:44 sdebnath__ joined #gluster
05:45 sahina joined #gluster
05:50 ramteid joined #gluster
05:58 maveric_amitc_ joined #gluster
06:02 atalur joined #gluster
06:03 dusmant joined #gluster
06:06 atalur joined #gluster
06:16 sputnik13 joined #gluster
06:29 nishanth joined #gluster
06:31 raghu joined #gluster
06:33 suman_d joined #gluster
06:38 h4rry_ joined #gluster
06:39 suman_d_ joined #gluster
06:40 sputnik13 joined #gluster
06:41 jobewan joined #gluster
06:42 rjoseph joined #gluster
07:09 mbukatov joined #gluster
07:11 nangthang joined #gluster
07:12 deniszh joined #gluster
07:15 SOLDIERz joined #gluster
07:19 jtux joined #gluster
07:29 deniszh1 joined #gluster
07:32 deniszh joined #gluster
07:32 deniszh left #gluster
07:37 sdebnath__ joined #gluster
07:38 lalatenduM joined #gluster
07:38 h4rry_ joined #gluster
07:44 maveric_amitc_ joined #gluster
07:54 Philambdo joined #gluster
07:55 kovshenin joined #gluster
08:05 kkeithley1 joined #gluster
08:19 kanagaraj joined #gluster
08:22 dusmantkp_ joined #gluster
08:23 anrao joined #gluster
08:32 bala joined #gluster
08:37 anoopcs joined #gluster
08:40 rwheeler joined #gluster
08:43 suman_d_ joined #gluster
08:47 nishanth joined #gluster
08:50 anoopcs joined #gluster
08:51 ramteid joined #gluster
08:51 overclk joined #gluster
08:51 ppai joined #gluster
08:53 lalatenduM joined #gluster
08:54 rjoseph joined #gluster
08:56 smohan joined #gluster
08:58 anil joined #gluster
09:00 blue_ joined #gluster
09:03 Pupeno joined #gluster
09:03 Pupeno joined #gluster
09:05 Slashman joined #gluster
09:11 bala joined #gluster
09:11 hagarth joined #gluster
09:15 shubhendu joined #gluster
09:22 aravindavk joined #gluster
09:27 atalur_ joined #gluster
09:29 T0aD joined #gluster
09:32 sdebnath__ joined #gluster
09:36 Norky joined #gluster
09:40 soumya_ joined #gluster
09:51 aravindavk joined #gluster
09:53 deniszh joined #gluster
09:57 shubhendu joined #gluster
10:03 lyang0 joined #gluster
10:04 priynag joined #gluster
10:04 priynag Heya! We are working towards building an Open Source IRC client called Scrollback.io where we log chats and use NLP to group them into meaningful conversations, like micro-forums.
10:04 priynag Is it okay if we log #gluster?
10:04 priynag We are already logging a few freenode channels like node.js - scrollback.io/nodejs
10:05 liquidat joined #gluster
10:06 hagarth priynag: should be fine
10:06 priynag hagarth: Thank you :)
10:16 ricky-ti1 joined #gluster
10:16 priynag left #gluster
10:34 sputnik13 joined #gluster
10:35 elico joined #gluster
10:40 stickyboy joined #gluster
10:42 anoopcs joined #gluster
10:54 ralalala joined #gluster
10:55 rjoseph joined #gluster
10:55 meghanam joined #gluster
10:57 soumya_ joined #gluster
11:00 hybrid5121 joined #gluster
11:04 bala joined #gluster
11:15 sdebnath__ joined #gluster
11:16 ppai joined #gluster
11:16 Manikandan_ joined #gluster
11:16 aravindavk joined #gluster
11:18 bene2 joined #gluster
11:41 T3 joined #gluster
12:02 __zerick joined #gluster
12:02 B21956 joined #gluster
12:06 _zerick_ joined #gluster
12:06 ildefonso joined #gluster
12:08 SOLDIERz joined #gluster
12:18 prasanth_ joined #gluster
12:19 rjoseph joined #gluster
12:20 bala joined #gluster
12:22 [Enrico] joined #gluster
12:22 ira joined #gluster
12:33 LebedevRI joined #gluster
12:33 DV joined #gluster
12:40 nishanth joined #gluster
12:41 XpineX joined #gluster
12:46 coredump joined #gluster
12:48 glusterbot News from newglusterbugs: [Bug 1189473] [RFE] While creating a snapshot the timestamp has to be appended to the snapshot name. <https://bugzilla.redhat.com/show_bug.cgi?id=1189473>
12:50 nangthang joined #gluster
12:56 kanagaraj joined #gluster
13:00 ralalala joined #gluster
13:02 B21956 joined #gluster
13:06 Slashman joined #gluster
13:08 rafi joined #gluster
13:10 smohan_ joined #gluster
13:14 anoopcs joined #gluster
13:16 calisto joined #gluster
13:27 Norky joined #gluster
13:37 wkf joined #gluster
13:40 smohan joined #gluster
13:42 nishanth joined #gluster
13:47 Manikandan joined #gluster
13:51 fuxiulian joined #gluster
13:51 fuxiulian Hello
13:51 glusterbot fuxiulian: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:52 fuxiulian I've tried GlusterFS and it looks like I may have tripped over a performance problem
13:53 fuxiulian When uploading 2000 files in a single folder the CPU is stuck at 100% for a large period of time
13:53 fuxiulian this doesn't happen when I upload 20 folders with 100 files each
13:53 fuxiulian That may point to an O(n^2) complexity code
13:53 Folken__ what CPU have you got?
13:54 Folken__ and what kind of filesystem is it
13:54 fuxiulian I'm running that in AWS. The same behavior happens with m3.medium and m3.xlarge
13:54 fuxiulian I'm using ext4
13:55 plarsen joined #gluster
13:55 fuxiulian m3.xlarge seems to be  Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz
13:56 nbalacha joined #gluster
13:57 fuxiulian if I writes the files to the drive bypassing gluster i don't see the O(n^2) behavior, so the issue doesn't seem to be related to ext4
13:57 fuxiulian write*
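A rough way to reproduce the comparison described above on a mounted volume (assuming a hypothetical mount point /mnt/gv0; file names and sizes are arbitrary):

    # 2000 small files in a single directory
    mkdir -p /mnt/gv0/flat
    time for i in $(seq 1 2000); do dd if=/dev/zero of=/mnt/gv0/flat/f$i bs=1k count=1 2>/dev/null; done

    # 20 directories of 100 files each
    time for d in $(seq 1 20); do mkdir -p /mnt/gv0/split/$d; for i in $(seq 1 100); do dd if=/dev/zero of=/mnt/gv0/split/$d/f$i bs=1k count=1 2>/dev/null; done; done

Watching glusterfsd CPU usage (e.g. with top) while each loop runs should show whether the flat layout is what triggers the spike.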
14:02 bala joined #gluster
14:03 kenansulayman joined #gluster
14:03 kenansulayman hey. Currently trying to setup gluster. can I set it up w/o a dedicated disk?
14:04 kenansulayman Just a folder that is
14:04 Manikandan joined #gluster
14:04 kkeithley_ kenansulayman: yes, you can just use a folder
14:05 kenansulayman Okay! So that's what I have right now:
14:05 mbukatov joined #gluster
14:05 kenansulayman Two nodes running and the first node probed the second one: http://data.sly.mn/image/2H3j0I1x1K1b
14:06 kenansulayman How do I start with setting up the volume? :F
14:09 Arminder joined #gluster
14:17 harish joined #gluster
14:20 Gill joined #gluster
14:23 R0ok_ joined #gluster
14:26 kenansulayman No one?
14:30 jobewan joined #gluster
14:37 ndarshan joined #gluster
14:37 bennyturns joined #gluster
14:39 marbu joined #gluster
14:39 maveric_amitc_ joined #gluster
14:41 hagr joined #gluster
14:44 aravindavk joined #gluster
14:45 diegows joined #gluster
14:45 dgandhi joined #gluster
14:48 anrao joined #gluster
14:48 sac`away joined #gluster
14:48 vikumar joined #gluster
14:48 hchiramm joined #gluster
14:49 rastar_afk joined #gluster
14:50 kanagaraj joined #gluster
14:51 Nicodonald joined #gluster
14:52 Nicodonald Hello everyone
14:53 Nicodonald I'm looking for a little help regarding gluster command
14:54 Nicodonald For monitoring purpose, I made a script that executes gluster volume status command
14:54 Nicodonald for this, I need to be root, or to sudo the command, but is there a workaround to allow a non-root user to do this ?
15:00 ppai joined #gluster
15:01 ProT-0-TypE joined #gluster
15:01 kkeithley_ kenansulayman: http://www.gluster.org/community/documentation/index.php/QuickStart   After probing the peer(s), use `gluster volume create $volname $node1:$path1 $node2:$path2` to create a volume on two nodes
15:02 kkeithley_ Nicodonald: not at this time
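The workaround most monitoring setups use is a narrow sudoers rule so the monitoring user can run only the status command without a password; a minimal sketch, assuming a hypothetical monitoring user "nagios" and the usual binary location:

    # /etc/sudoers.d/gluster-monitor
    nagios ALL=(root) NOPASSWD: /usr/sbin/gluster volume status

    # the monitoring script then calls
    sudo /usr/sbin/gluster volume status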
15:03 kenansulayman Say I want to allocate the folder /brick on node1 and /brick on node 2.. that would be gluster volume mycluster create node01:/brick node02:/brick
15:03 kkeithley_ correct
15:04 kkeithley_ gluster volume create mycluster node1:/brick node2:/brick
15:04 Nicodonald kkeithley_: thank you
15:04 kkeithley_ that creates a 'distribute' volume
15:04 squizzi joined #gluster
15:05 kkeithley_ since you're using directories (folders) you need to append 'force' to the end of the command
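Putting those pieces together, the create command for a plain distribute volume over two directory bricks would look roughly like this ('force' added because the bricks are plain folders rather than dedicated mounts):

    gluster volume create mycluster node01:/brick node02:/brick force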
15:05 kenansulayman hm.. volume create: mycluster: failed: Host node1 is not in 'Peer in Cluster' state
15:06 kenansulayman Should the "host names" be the FQDN of the hostnames? e.g. node.mydomain.tld?
15:06 kkeithley_ you probed node2 from node1?
15:06 kenansulayman yes
15:06 ndevos @hostnames
15:06 glusterbot ndevos: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
15:06 kkeithley_ you need hostnames in DNS or /etc/hosts,  hostname of host1 must resolve to its name in DNS or /etc/hosts
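In other words, every node must resolve every other node's name (and its own) consistently. A minimal /etc/hosts sketch, added on both machines, with placeholder addresses:

    10.0.0.11   node01
    10.0.0.12   node02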
15:07 kenansulayman aha! Let me try :-)
15:10 elico joined #gluster
15:12 kenansulayman hm, it now says: volume create: mycluster: failed: /brick or a prefix of it is already part of a volume but gluster volume info all says "No volumes present"
15:12 glusterbot kenansulayman: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
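The fix described in that blog post amounts to removing the stale volume markers from the directory before reusing it; a hedged sketch, assuming the brick path is /brick and that it really is not part of any live volume:

    setfattr -x trusted.glusterfs.volume-id /brick
    setfattr -x trusted.gfid /brick
    rm -rf /brick/.glusterfs
    # then restart glusterd (e.g. service glusterfs-server restart on Ubuntu)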
15:14 lmickh joined #gluster
15:16 kkeithley_ see what I wrote above. Use 'force' at the end of the volume create command
15:16 nbalacha joined #gluster
15:16 wushudoin joined #gluster
15:17 kkeithley_ or do what glusterbot said to do
15:18 glusterbot News from resolvedglusterbugs: [Bug 1174247] Glusterfs outputs a lot of warnings and errors when quota is enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1174247>
15:19 kenansulayman mhpf. I got around that issue, now it's getting weird: volume create: mycluster: failed: Host node02 is not in 'Peer in Cluster' state
15:20 kenansulayman oh I see. I have to add the hostnames to the etc/hosts then establish the pool
15:21 kenansulayman niiice! volume create: mycluster: success: please start the volume to access data
15:21 kkeithley_ all the hostnames in the /etc/hosts on all the nodes
15:21 kkeithley_ there you go
15:22 kenansulayman Thank you!
15:22 ndarshan joined #gluster
15:23 kkeithley_ yw
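To finish the sequence from here, the volume still has to be started and then mounted from a client; roughly:

    gluster volume start mycluster
    gluster volume info mycluster                            # should now show Status: Started
    mount -t glusterfs node01:/mycluster /mnt/mycluster      # on a client, after creating the mount point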
15:28 jobewan joined #gluster
15:34 victori joined #gluster
15:35 B21956 joined #gluster
15:35 virusuy joined #gluster
15:42 vikumar__ joined #gluster
15:42 hchiramm_ joined #gluster
15:42 sac`away` joined #gluster
15:44 jobewan joined #gluster
15:45 jmarley joined #gluster
15:57 rastar_afk joined #gluster
16:01 aravindavk joined #gluster
16:03 Durzo joined #gluster
16:03 Durzo hey guys, just updated from 3.6.1 to 3.6.2 and now a 'volume heal <vol> info' returns 'Volume heal failed' - any ideas? is this normal?
16:04 Durzo the SHD is running on both bricks, but the Port says N/A
16:04 Durzo restarted both bricks with no change
16:05 T3 JoeJulian++
16:05 glusterbot T3: JoeJulian's karma is now 19
16:05 T3 for life saving blog posts
16:05 sac`away joined #gluster
16:06 RaSTarl joined #gluster
16:06 hchiramm joined #gluster
16:08 Durzo oo redhat people
16:08 Durzo hello
16:08 glusterbot Durzo: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:08 Durzo bleh :/
16:09 LebedevRI joined #gluster
16:10 Durzo just updated from 3.6.1 to 3.6.2 and now a 'volume heal <vol> info' returns 'Volume heal failed' - any ideas? is this normal? SHD is running on both nodes
16:11 smohan joined #gluster
16:15 hchiramm_ joined #gluster
16:15 sac`away` joined #gluster
16:20 hagarth joined #gluster
16:20 rastar_afk joined #gluster
16:21 blue_vd hello, everybody. Does anyone know who actually replicates data in a striped/replicated setup ? Is it the gluster client that sends chunks of data twice over the network, first on one replica and then on the other ? Or it's one of the replicas that opens a connection to the other and receives the copied data ? Question originates in my observation that on a striped/replicated volume of 4 bricks i can only get about 50MB/sec write speed which would be consistent with 100MB/s (gigabit network limit) shared between two streams of data
16:24 nishanth joined #gluster
16:24 ProT-0-TypE joined #gluster
16:27 semiosis blue_vd: FUSE clients do the replication themselves.  NFS clients rely on the server to do the replication
16:27 Durzo semiosis, dude any idea about my heal issue above? was working perfectly in 3.6.1
16:28 semiosis Durzo: no idea
16:28 Durzo same thing happened in 3.5.1, apparently you left out the glfsheal binary from the package.. but i cant see that binary anywhere in 3.6.1 either
16:28 semiosis are you using the PPA package I just published this week?  3.6.2 for trusty?
16:28 Durzo yeah
16:28 semiosis hmmm i'll double check on that
16:28 Durzo going off bug 1113778
16:29 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1113778 medium, unspecified, ---, pkarampu, ASSIGNED , gluster volume heal info keep reports "Volume heal failed"
16:29 semiosis could you please file an issue here: https://github.com/semiosis/glusterfs-debian/issues
16:29 Durzo its the exact same thing, just with 3.6.2 from 3.6.1
16:29 blue_vd semiosis, it neither, i think : i use qemu/kvm which uses libgluster directly
16:29 semiosis Durzo: about the missing executable
16:29 Durzo 3.6.1 doesnt have the binary, but heal succeeds
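One way to check whether the packaging actually shipped the helper that `volume heal <vol> info` relies on is to ask dpkg which installed package, if any, owns a glfsheal file (the install path varies by packaging, so this is only a hint):

    dpkg -S glfsheal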
16:29 semiosis blue_vd: that's equivalent to a FUSE client
16:29 blue_vd bummer
16:30 blue_vd so the official word would be that the maximum write speed a client could achieve would be (client bandwidth / number of replicas), right ?
16:31 semiosis blue_vd: sounds right to me
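As a rough back-of-the-envelope check against the numbers above: a 1 Gbit/s client link carries roughly 115-120 MB/s of payload, and with two replicas the client sends every byte twice, so

    ~117 MB/s / 2 replicas ≈ ~58 MB/s of client write throughput

which is in the same ballpark as the ~50 MB/s observed.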
16:32 blue_vd peer2peer realtime mirroring between replicas would be soooo useful then. Feature request ? :D
16:33 semiosis blue_vd: ,,(nsr)
16:33 glusterbot blue_vd: I do not know about 'nsr', but I do know about these similar topics: 'nsr pdf'
16:33 semiosis ehh
16:33 semiosis blue_vd: it's already in the works, called NSR, but no idea when it's going to happen
16:33 semiosis @learn nsr as http://blog.gluster.org/2014/04/new-style-replication/
16:34 glusterbot semiosis: The operation succeeded.
16:34 blue_vd newstyle replication ?
16:34 soumya_ joined #gluster
16:35 Durzo semiosis, https://github.com/semiosis/glusterfs-debian/issues/6 - sorry for bad formatting.. github seems to have done something weird with my "#"
16:36 doekia joined #gluster
16:36 kkeithley1 joined #gluster
16:36 semiosis Durzo: that's markdown doing its thing
16:36 Durzo fixed
16:36 Durzo do you need any other info?
16:37 semiosis if you indent with 4 spaces markdown renders it as code
16:37 semiosis reload
16:37 Durzo im just a bit paranoid.. maybe i should downgrade to 3.6.1
16:37 semiosis maybe
16:37 Durzo can i even do that?
16:37 Durzo will it work
16:37 semiosis dunno
16:37 Durzo gah :/
16:38 TvL2386 joined #gluster
16:38 blue_vd semiosis, thanks, that's exactly the issue :) Hope this would be implemented soon(ish)
16:39 jdossey joined #gluster
16:39 Durzo k one last question before bed.. have you ever seen gluster cause problems with XFS? every 8 or so hours im seeing one of my bricks do this: http://fpaste.org/181194/30237931/raw/ - seems gluster related, if i leave the brick out of the volume it doesnt happen anymore
16:40 semiosis i've never seen it, but maybe someone else has
16:41 ralalala joined #gluster
16:41 jdossey Durzo: have you tried running xfs_repair?
16:42 Durzo jdossey, i have, and it succeeds without issue, but the issue happens again and again
16:42 Durzo if i leave gluster offline, the issue doesnt happen
16:42 Durzo whether thats gluster causing, or the fault being caused by gluster using the disk.. i am unsure...
16:43 Durzo this is an AWS ec2 instance, so i dont suspect disk issues
16:43 jdossey Durzo: then it looks like gluster is bumping into an xfs bug
16:44 semiosis lets not jump to conclusions :)
16:44 Durzo its a hard link to make
16:44 semiosis Durzo: are you using the official ubuntu AMIs?
16:44 semiosis is the brick EBS or instance-store?
16:44 Durzo semiosis, yeah, 14.04 cloud image with kernel 3.13.0-45
16:44 Durzo EBS
16:44 Durzo 2 volume gp-2 in raid0
16:44 Durzo i have snapshotted & re-launched the array as new volumes
16:45 Durzo nothing seems to help
16:45 semiosis why raid0?
16:45 semiosis can you try without that?
16:45 Durzo taking advantage of burstable gp2 iops
16:46 semiosis seems to me it's just as likely the thing below xfs (md) is causing problems as the thing above it (gluster)
16:46 Durzo semiosis, i can try anything.. little stumped as to how.. can one simply rsync a brick onto a new fs ?
16:46 Durzo can i start with an empty brick and heal it up?
16:46 semiosis Durzo: yes, be sure to use --aHAX --inplace
16:47 semiosis you can let glusterfs heal also
16:47 semiosis either should work
16:47 Durzo semiosis, empty brick i would need to rebalance right?
16:47 semiosis nope
16:47 Durzo oh
16:47 Durzo so i can just swap out the underlying ebs volume and start glusterd ?
16:47 Durzo wont the volume info be gone?
16:47 semiosis just kill the brick process (or stop the whole volume) mount a new filesystem at the brick mount point, then start volume force and heal should take over
16:48 Durzo right
16:48 semiosis ,,(processes)
16:48 Durzo 197GB of data
16:48 glusterbot The GlusterFS core uses three process names: glusterd (management daemon, one per server); glusterfsd (brick export daemon, one per brick); glusterfs (FUSE client, one per client mount point; also NFS daemon, one per server). There are also two auxiliary processes: gsyncd (for geo-replication) and glustershd (for automatic self-heal).
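The two options discussed above, spelled out as a hedged sketch (volume name and brick paths are hypothetical; assume volume "gv0" with a brick at server1:/bricks/b1):

    # option 1: pre-seed the new filesystem from the old brick, then swap it in
    rsync -aHAX --inplace /old-brick/ /new-brick/

    # option 2: swap in an empty filesystem and let self-heal repopulate it
    #   1. kill the glusterfsd process for server1:/bricks/b1 (or stop the whole volume)
    #   2. mount the new filesystem at /bricks/b1
    gluster volume start gv0 force
    gluster volume heal gv0 full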
16:49 gem joined #gluster
16:49 Durzo semiosis, thanks.. how long do you estimate before that heal info will be fixed in the 3.6.2 packages? tossing up downgrading or waiting before i do this.. bit of a pain to heal an entire brick without being able to use the info cmd
16:50 semiosis i'll try to get to it tonight
16:50 Durzo ok, il see if these xfs errors persist on 3.6.2 in the meantime
16:50 Durzo thanks heaps
16:51 semiosis yw
16:51 * Durzo sleeps
16:58 victori joined #gluster
17:02 SOLDIERz joined #gluster
17:08 gildub joined #gluster
17:13 lalatenduM joined #gluster
17:16 bala joined #gluster
17:19 harish joined #gluster
17:20 MacWinner joined #gluster
17:20 syntaxerrors joined #gluster
17:20 MacWinner joined #gluster
17:23 suman_d joined #gluster
17:24 h4rry joined #gluster
17:24 talisker joined #gluster
17:26 talisker Hi, we rolled out GlusterFS for one of applications (iWay) recently. Initial tests were fine, but lately a few errors have cropped up ...
17:26 talisker [2015-02-05 15:12:47.789519] W [fuse-bridge.c:422:fuse_entry_cbk] 0-glusterfs-fuse: 765792: LOOKUP() /ibi_apps/l8/focexec/nl8cunum.fex => -1 (Permission denied) [2015-02-05 15:12:47.790967] W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-mdcgv-client-0: remote operation failed: Permission denied. Path:
17:26 syntaxerrors Hi. I've recently added a second gluster server and 5 EBS volumes to complement my single gluster-15 EBS volume storage but now I'd like to move some of those 15 ebs volumes to the second gluster server in order to more evenly distribute the IO load. Is it advisable to still use 'replace-brick' to migrate to the other server or can I save time by virtually detaching from the first and re-attaching the ebs volume to the second server instance in aws?
17:27 talisker what permission is being referred to here? It can't be the account's access to the path as that has been verified to be correct
17:28 talisker Has anybody experienced similar permission errors? If so and you've fixed them, can you please shed some light?
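Without more of the log it is hard to say, but EACCES on LOOKUP often comes from a mismatch between what the bricks have on disk and what the client expects, or from root-squash / auth options on the volume. A hedged first-pass check, assuming a hypothetical brick path:

    # on each brick server, inspect the backend copy of the failing path
    ls -ln /bricks/mdcgv/ibi_apps/l8/focexec/nl8cunum.fex
    getfacl /bricks/mdcgv/ibi_apps/l8/focexec/nl8cunum.fex

    # on any peer, check whether root-squash or auth.allow/auth.reject options are set
    gluster volume info mdcgv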
17:32 free_amitc_ joined #gluster
17:33 amitc__ joined #gluster
17:34 kanagaraj joined #gluster
17:35 shubhendu joined #gluster
17:36 victori joined #gluster
17:37 rcampbel3 joined #gluster
17:37 hchiramm joined #gluster
17:38 rastar_afk joined #gluster
17:39 lalatenduM joined #gluster
17:39 syntaxerrors left #gluster
17:40 hchiramm_ joined #gluster
17:41 RaSTarl joined #gluster
17:42 syntaxerrors joined #gluster
17:43 sac`away joined #gluster
17:43 MacWinner joined #gluster
17:46 Philambdo joined #gluster
17:53 ricky-ticky1 joined #gluster
18:01 virusuy hi guys
18:01 virusuy i have an issue trying to replace a brick in our 4-node gluster
18:02 virusuy my shiny node seems to be unable to start glusterfs-server because one of the nodes is not correctly declared
18:02 virusuy but, i cannot "remove" it because, the service isn't running, so, i'm in a catch-22 situation
18:10 Rapture joined #gluster
18:11 sputnik13 joined #gluster
18:16 tg2 fix the config
18:17 tg2 or just force remove it, and copy the files back into the array from the underlying brick
18:19 virusuy Can I rsync /var/lib/glusterd/peers/ files from other node ?
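For reference, each file under /var/lib/glusterd/peers/ is named after a remote peer's UUID and contains roughly uuid=, state= and hostname1= lines, while the node's own UUID lives in /var/lib/glusterd/glusterd.info:

    cat /var/lib/glusterd/peers/<uuid-of-a-peer>
    cat /var/lib/glusterd/glusterd.info

Copying peer files from a healthy node can work when rebuilding a replacement node, but the file describing the local node itself must not be present on that node, and glusterd.info has to keep the UUID the cluster expects for it.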
18:27 suman_d joined #gluster
18:28 semiosis syntaxerrors: you should be able to stop the volume, move the ebs vols to the other server, and use replace-brick commit force to update the volume config.  that's the general idea.  try it out on a test volume to get the procedure right before trying it on a real volume
18:31 syntaxerrors semiosis: thanks for the tip.
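A hedged sketch of that procedure for a single brick (volume name and paths are hypothetical; as suggested above, rehearse it on a scratch volume first):

    gluster volume stop gv0
    # detach the EBS volume from server1 and attach/mount it at the same path on server2
    gluster volume replace-brick gv0 server1:/bricks/b3 server2:/bricks/b3 commit force
    gluster volume start gv0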
18:32 virusuy tg2: nevermind
18:33 virusuy tg2: i ran glusterd --debug and error was just in front of my eyes
18:40 fandi joined #gluster
18:42 virusuy well, now i'm stuck in "Sent and Received peer request" status
18:47 blue_vd joined #gluster
18:48 ralalala joined #gluster
18:59 tg2 virusuy, what version?
19:02 hchiramm__ joined #gluster
19:05 redbeard joined #gluster
19:05 maveric_amitc_ joined #gluster
19:20 squizzi joined #gluster
19:21 Philambdo joined #gluster
19:28 deniszh joined #gluster
19:34 bennyturns joined #gluster
19:46 drankis joined #gluster
19:53 ralalalala joined #gluster
19:54 deniszh joined #gluster
19:57 ralala joined #gluster
20:03 ITSpete joined #gluster
20:04 ITSpete left #gluster
20:08 jbrooks joined #gluster
20:10 Philambdo joined #gluster
20:11 deniszh joined #gluster
20:12 krullie joined #gluster
20:14 bala joined #gluster
20:15 coredump|br joined #gluster
20:19 squizzi joined #gluster
20:37 SOLDIERz joined #gluster
20:44 verboese joined #gluster
20:45 ralala joined #gluster
20:46 badone joined #gluster
20:56 XpineX joined #gluster
20:57 jobewan joined #gluster
20:58 jobewan joined #gluster
21:03 siel joined #gluster
21:06 shaunm joined #gluster
21:06 ralala joined #gluster
21:10 B21956 joined #gluster
21:21 ralala joined #gluster
21:27 XpineX joined #gluster
21:37 badone joined #gluster
21:39 SOLDIERz joined #gluster
21:42 ralala joined #gluster
21:43 gildub_ joined #gluster
21:48 ralala joined #gluster
21:50 glusterbot News from newglusterbugs: [Bug 858732] glusterd does not start anymore on one node <https://bugzilla.redhat.com/show_bug.cgi?id=858732>
21:51 dgandhi joined #gluster
21:53 dgandhi joined #gluster
21:53 dgandhi joined #gluster
21:56 jobewan joined #gluster
22:01 rcampbel3 Can anyone help with my fstab entry - I believe I want to disable direct IO mode... here's what I have: 10.17.11.168:/data /mnt/data glusterfs defaults,direct-io-mode=disable,_netdev,backupvolfile-server=10.1
22:04 rcampbel3 (last part is IP that was truncated followed by 0 0) Also, now... when I added the direct-io-mode=... I get this on client mount: "WARNING: getfattr not found, certain checks will be skipped.."
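For reference, a complete fstab line of that shape would look roughly like the following (the truncated backup-server IP is left as a placeholder):

    10.17.11.168:/data  /mnt/data  glusterfs  defaults,_netdev,direct-io-mode=disable,backupvolfile-server=<second-server-ip>  0  0

The "getfattr not found" warning only means the getfattr utility is missing on the client; on Ubuntu/Debian it is provided by the attr package (apt-get install attr) and is unrelated to the direct-io-mode option.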
22:15 Lee- joined #gluster
22:16 Lee- joined #gluster
22:17 lalatenduM joined #gluster
22:21 badone joined #gluster
22:21 marcoceppi joined #gluster
22:30 XpineX joined #gluster
22:44 SOLDIERz joined #gluster
23:15 marcoceppi joined #gluster
23:43 jmarley joined #gluster
23:49 SOLDIERz joined #gluster
