
IRC log for #gluster, 2014-07-23


All times shown according to UTC.

Time Nick Message
00:03 tyl0r joined #gluster
00:12 firemanxbr joined #gluster
00:15 xleo joined #gluster
00:26 T0aD joined #gluster
00:42 plarsen joined #gluster
00:55 sputnik13 joined #gluster
01:01 B21956 joined #gluster
01:09 tristanz joined #gluster
01:13 hagarth joined #gluster
01:21 bala joined #gluster
01:35 tristanz left #gluster
01:59 Peter4 i think my gluster is broken … every time a job starts (writing) we get brick errors
02:00 Peter4 http://pastie.org/9413749
02:00 glusterbot Title: #9413749 - Pastie (at pastie.org)
02:02 Peter4 and bricks still keep crashing
02:03 Peter4 http://pastie.org/9413757
02:03 glusterbot Title: #9413757 - Pastie (at pastie.org)
02:03 Peter4 noticed these in the glustershd.log
02:03 Peter4 why is it using 3.3??
02:03 Peter4 we do not have gfs client
02:03 Peter4 and all the servers are 3.5.1
02:05 jiku joined #gluster
02:06 Peter4 help!!!
02:25 haomaiwang joined #gluster
02:37 haomai___ joined #gluster
03:12 meghanam joined #gluster
03:12 meghanam_ joined #gluster
03:14 bharata-rao joined #gluster
03:15 MacWinner joined #gluster
03:25 XpineX_ joined #gluster
03:32 shubhendu_ joined #gluster
03:33 haomaiwa_ joined #gluster
03:41 kanagaraj joined #gluster
03:41 jobewan joined #gluster
03:43 itisravi joined #gluster
03:53 nbalachandran joined #gluster
04:16 meghanam__ joined #gluster
04:16 atinmu joined #gluster
04:17 meghanam joined #gluster
04:23 ndarshan joined #gluster
04:24 Rafi_kc joined #gluster
04:29 haomai___ joined #gluster
04:31 anoopcs joined #gluster
04:38 nishanth joined #gluster
04:41 kdhananjay joined #gluster
04:44 jiffin joined #gluster
04:54 spandit joined #gluster
04:59 ramteid joined #gluster
05:04 ppai joined #gluster
05:14 kumar joined #gluster
05:21 vpshastry joined #gluster
05:27 prasanth_ joined #gluster
05:27 andreask joined #gluster
05:29 deepakcs joined #gluster
05:30 saurabh joined #gluster
05:31 kshlm joined #gluster
05:32 aravindavk joined #gluster
05:39 ndarshan joined #gluster
05:44 Philambdo joined #gluster
05:49 shubhendu_ joined #gluster
05:55 lalatenduM joined #gluster
06:05 sputnik13 joined #gluster
06:06 psharma joined #gluster
06:10 hchiramm joined #gluster
06:14 raghu joined #gluster
06:28 ppai joined #gluster
06:32 vkoppad joined #gluster
06:49 sputnik13 joined #gluster
06:52 edong23 joined #gluster
06:53 getup- joined #gluster
06:55 hchiramm joined #gluster
06:59 ekuric joined #gluster
07:05 wgao joined #gluster
07:08 deepakcs joined #gluster
07:10 andreask joined #gluster
07:19 keytab joined #gluster
07:21 aravindavk joined #gluster
07:24 harish__ joined #gluster
07:35 giannello joined #gluster
07:37 ricky-ti1 joined #gluster
07:37 sputnik13 joined #gluster
07:42 glusterbot New news from newglusterbugs: [Bug 1122395] man or info page of gluster needs to be updated with self-heal commands. <https://bugzilla.redhat.com/show_bug.cgi?id=1122395> || [Bug 1109613] gluster volume create fails with ambiguous error <https://bugzilla.redhat.com/show_bug.cgi?id=1109613>
07:45 ekuric joined #gluster
07:45 hchiramm joined #gluster
07:53 ppai joined #gluster
08:00 aravindavk joined #gluster
08:06 sputnik13 joined #gluster
08:10 Pupeno joined #gluster
08:12 glusterbot New news from newglusterbugs: [Bug 1122417] Writing data to a dispersed volume mounted by NFS fails <https://bugzilla.redhat.com/show_bug.cgi?id=1122417>
08:15 liquidat joined #gluster
08:19 hchiramm joined #gluster
08:48 sputnik13 joined #gluster
08:48 cultavix joined #gluster
08:48 Philambdo joined #gluster
08:49 ppai joined #gluster
08:57 sputnik13 joined #gluster
08:57 vikumar joined #gluster
09:03 sputnik13 joined #gluster
09:09 y4m4 joined #gluster
09:09 y4m4 joined #gluster
09:09 eightyeight joined #gluster
09:11 haomaiwa_ joined #gluster
09:15 haomaiw__ joined #gluster
09:20 ppai joined #gluster
09:34 kanagaraj joined #gluster
09:39 ppai joined #gluster
09:43 glusterbot New news from newglusterbugs: [Bug 1122443] Symlinks change date while migrating <https://bugzilla.redhat.com/show_bug.cgi?id=1122443>
09:47 Philambdo1 joined #gluster
09:51 kdhananjay joined #gluster
09:55 hchiramm joined #gluster
09:59 LebedevRI joined #gluster
10:02 bala joined #gluster
10:19 meghanam joined #gluster
10:20 meghanam__ joined #gluster
10:37 mbukatov joined #gluster
10:37 lkoranda joined #gluster
10:40 rjoseph joined #gluster
10:51 xleo joined #gluster
10:54 ekuric joined #gluster
10:55 edward1 joined #gluster
10:56 ppai joined #gluster
11:03 diegows joined #gluster
11:05 ekuric joined #gluster
11:10 atinmu joined #gluster
11:17 psharma joined #gluster
11:19 92AAAAQFW joined #gluster
11:20 kdhananjay joined #gluster
11:22 ppai joined #gluster
11:34 qdk joined #gluster
11:39 simulx joined #gluster
11:42 sputnik13 joined #gluster
11:43 ppai joined #gluster
11:50 psharma joined #gluster
11:59 ctria joined #gluster
12:07 rjoseph joined #gluster
12:13 glusterbot New news from newglusterbugs: [Bug 1122509] Restarting glusterd to bring a offline brick online is also restarting nfs and glustershd process <https://bugzilla.redhat.com/show_bug.cgi?id=1122509>
12:22 bene2 joined #gluster
12:30 rwheeler joined #gluster
12:34 Rafi_kc joined #gluster
12:48 Slashman joined #gluster
12:49 recidive joined #gluster
12:51 theron joined #gluster
12:58 hagarth joined #gluster
13:00 julim joined #gluster
13:05 bala joined #gluster
13:05 rjoseph joined #gluster
13:06 plarsen joined #gluster
13:07 vshankar joined #gluster
13:11 harish__ joined #gluster
13:13 glusterbot New news from newglusterbugs: [Bug 1122533] tests/bug-961307.t: Echo output string in case of failure for easy debug <https://bugzilla.redhat.com/show_bug.cgi?id=1122533>
13:14 nishanth joined #gluster
13:15 sjm joined #gluster
13:24 kanagaraj joined #gluster
13:25 cultavix joined #gluster
13:25 kkeithley1 joined #gluster
13:42 caiozanolla joined #gluster
13:45 coredumb joined #gluster
13:46 coredumb Hello
13:46 glusterbot coredumb: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:47 coredumb is there no way to allow root squashing only for certain hosts with NFS ?
13:47 andreask1 joined #gluster
13:49 tdasilva joined #gluster
13:53 caiozanolla JoeJulian, a couple of days ago you told me about replication being done from the client; with that, I was able to saturate the servers. nice. One side effect though is that one of the replicated servers went offline while the 1st one is still receiving files. Self healing happens from server to server, right? So if I just turn the 2nd server back on after the initial load of files, it should pick up where it left off and have the same dataset after a while?
13:55 caiozanolla JoeJulian, if that's the case, what would be the behaviour if a client asks for a file that is not on the 2nd (using the fuse client)?
14:03 Alex___ joined #gluster
14:10 xleo joined #gluster
14:10 JoeJulian caiozanolla: The lookup function call contacts both servers to see if the file is healed before the rest of the file ops go through. That would start a background self-heal and you would read from the clean server (and would continue reading from the clean server until the fd is closed).
14:11 JoeJulian coredumb: nope
14:13 caiozanolla JoeJulian, nice, thanks again! what about the "write all to one node, let the second get healed afterwards" approach, anything I should consider? (transfers go 5x faster when going to just 1 server) so I'm thinking about going this way.
14:15 _Bryan_ joined #gluster
14:16 coredumb JoeJulian: so basically if I want to create directories as root occasionally, I should switch off root squashing, do my modification, then switch it back on, right?
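A minimal sketch of the on/off dance coredumb describes, assuming GlusterFS 3.4 or newer and a hypothetical volume named myvol (root squashing is a per-volume setting, which is why JoeJulian answers "nope" to a per-host variant):
    # disable root squashing just long enough to do root-level work as root on an NFS client
    gluster volume set myvol server.root-squash off
    mkdir /mnt/myvol/new-directory
    gluster volume set myvol server.root-squash on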
14:17 mortuar joined #gluster
14:18 wushudoin joined #gluster
14:18 ndk joined #gluster
14:18 bennyturns joined #gluster
14:25 anoopcs joined #gluster
14:30 jobewan joined #gluster
14:31 mojibakeumd joined #gluster
14:33 sputnik13 joined #gluster
14:35 R0ok_ joined #gluster
14:36 mojibakeumd Hello hello.. I will just jump right in. Looking to setup Gluster on some EC2 nodes in VPC. Reading through the documentation on setting up master node and peers for Volume Distribution and Replication, the instructions for mounting a client only include the master node, or a peer as a backup volume. How does the client know to talk to additional peers for better distributed performance?
14:41 caiozanolla jumping on mojibakeumd's thread, how does it choose which peer it will talk to? is a "preferred peer" setting possible on the client?
14:44 glusterbot New news from newglusterbugs: [Bug 1122581] Sometimes self heal on disperse volume crashes <https://bugzilla.redhat.com/show_bug.cgi?id=1122581> || [Bug 1122586] Read/write speed on a dispersed volume is poor <https://bugzilla.redhat.com/show_bug.cgi?id=1122586>
14:46 JoeJulian @mount server
14:46 glusterbot JoeJulian: The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns
14:48 nbalachandran joined #gluster
14:51 JoeJulian mojibakeumd, caiozanolla: ^
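A minimal mount sketch of what glusterbot describes, with hypothetical hosts server1/server2 and a volume named myvol (the backup option is spelled backupvolfile-server in 3.4 and backup-volfile-servers in 3.5):
    # server1 is only asked for the volume definition (volfile); once mounted,
    # the client opens connections to every brick server in the volume
    mount -t glusterfs -o backupvolfile-server=server2 server1:/myvol /mnt/myvol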
14:52 JoeJulian caiozanolla: see "gluster volume set help" and look for read-subvolume, read-local, and read-hash-mode for various options.
14:54 JoeJulian caiozanolla: Letting it heal afterward is a valid choice if your remote performance is satisfactory during the unhealed time.
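One way to watch the catch-up JoeJulian mentions while the second server heals, assuming a volume named myvol:
    # list entries still waiting for self-heal on each brick
    gluster volume heal myvol info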
14:55 JoeJulian Later folks. Back to OSCON for me.
14:55 caiozanolla JoeJulian, thanks man! just what I wanted to hear!
14:56 mojibakeumd thank you glusterbot and joejulian.
15:05 lmickh joined #gluster
15:17 cultavix joined #gluster
15:31 sputnik13 joined #gluster
15:32 anoopcs joined #gluster
15:32 mortuar joined #gluster
15:34 andreask joined #gluster
15:47 jiffe98 anyone running nfs import of a local gluster fuse mount with 3.4?
15:50 R0ok_ joined #gluster
15:52 jiffe98 this seemed to work wonders with apache/php; my old servers on 3.3 run at a load average of 0.3 while the new servers are running at a load average of 4
16:01 tyl0r joined #gluster
16:03 burn420 joined #gluster
16:03 burn420 hello
16:03 glusterbot burn420: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:05 burn420 I am planning on upgrading my gluster setup to 3.5; are there a lot of known issues with this upgrade, does anyone know?
16:05 recidive joined #gluster
16:08 andreask joined #gluster
16:17 burn420 Now dev tells me they don't need 3.5!
16:17 burn420 lol
16:30 kryl joined #gluster
16:30 kryl hi
16:30 glusterbot kryl: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:30 kryl I'm trying to use lsyncd (and it works locally), except when I try to sync a mounted source (mounted with the -t glusterfs parameter) the kernel doesn't seem to notice changes in the mounted source, any idea please?
16:30 Rafi_kc joined #gluster
16:38 sputnik13 joined #gluster
16:39 ndk joined #gluster
16:47 Peter4 joined #gluster
16:57 rjoseph joined #gluster
17:00 vshankar joined #gluster
17:00 anoopcs1 joined #gluster
17:00 theron joined #gluster
17:05 zerick joined #gluster
17:07 Philambdo joined #gluster
17:09 Rafi_kc joined #gluster
17:10 LeBlaaanc joined #gluster
17:11 LeBlaaanc Anyone aware that all the links on this page are broken? http://www.gluster.org/documentation/Getting_started_overview/
17:11 glusterbot Title: Gluster (at www.gluster.org)
17:15 doo joined #gluster
17:17 Loku joined #gluster
17:18 dtrainor joined #gluster
17:19 sputnik13 joined #gluster
17:20 jbrooks joined #gluster
17:22 Loku any documentation for gluster 3.5 autofs configuration ?
17:28 zerick joined #gluster
17:28 Rafi_kc joined #gluster
17:29 anoopcs joined #gluster
17:38 tyl0r joined #gluster
17:44 sputnik13 joined #gluster
17:45 _dist joined #gluster
17:48 theY4Kman What was the resolution to this thread? http://supercolony.gluster.org/pipermail/gluster-users/2014-April/039817.html
17:48 glusterbot Title: [Gluster-users] Gluster quota issue (at supercolony.gluster.org)
17:52 bjornar joined #gluster
17:52 mojibakeumd is there a command to convert the Type of a volume, from let's say Distributed to Replica?
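The usual approach for this is to add a brick while raising the replica count; a hedged sketch with hypothetical names (existing data then still has to be healed onto the new brick):
    # turn a single-brick distribute volume into a 2-way replica
    gluster volume add-brick myvol replica 2 server2:/export/brick1/myvol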
17:53 ramteid joined #gluster
18:05 elico joined #gluster
18:08 theron joined #gluster
18:10 sputnik13 joined #gluster
18:10 hagarth1 joined #gluster
18:11 hagarth2 joined #gluster
18:13 sputnik13 joined #gluster
18:13 portante joined #gluster
18:13 dblack joined #gluster
18:16 bala1 joined #gluster
18:16 mojibakeumd Do any additional commands need to be run to commit or flush a delete volume command? I am trying to reuse the brick for a new volume, but I am getting error "volume create: rep-vol: failed: /export/gfs/shared is already part of a volume"
18:17 mojibakeumd gluster volume info returns there are no volumes.
18:18 tdasilva_ joined #gluster
18:18 _dist joined #gluster
18:18 hagarth joined #gluster
18:18 bala joined #gluster
18:19 rjoseph joined #gluster
18:19 Slashman joined #gluster
18:19 kkeithley joined #gluster
18:20 rwheeler_ joined #gluster
18:20 lkoranda joined #gluster
18:20 kkeithley joined #gluster
18:21 mojibakeumd Ahhh, looks like I kinda solved the issue. Looks like I need to use a new name, /export/gfs/sharednew. Meaning that delete volume did not clean that up.
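The "is already part of a volume" error usually comes from extended attributes left on the brick directory by the old volume; if reusing the original path is preferred over renaming it, a commonly used cleanup looks like the following (run on the brick server, and only once the old volume is really gone):
    # remove the leftover volume markers so the path can be used as a brick again
    setfattr -x trusted.glusterfs.volume-id /export/gfs/shared
    setfattr -x trusted.gfid /export/gfs/shared
    rm -rf /export/gfs/shared/.glusterfs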
18:22 sputnik13 joined #gluster
18:22 vshankar joined #gluster
18:23 mwoodson joined #gluster
18:23 MacWinner joined #gluster
18:23 julim joined #gluster
18:23 bfoster joined #gluster
18:23 JustinClift joined #gluster
18:24 dblack joined #gluster
18:24 rturk|afk joined #gluster
18:24 portante joined #gluster
18:25 radez_g0n3 joined #gluster
18:27 msvbhat joined #gluster
18:28 kkeithley joined #gluster
18:28 mkent joined #gluster
18:29 tyl0r joined #gluster
18:31 xleo joined #gluster
18:33 sputnik13 joined #gluster
18:38 ekuric joined #gluster
18:42 mkent hi #gluster! so i've got a working geo-replication setup on 3.3.1, but i'm having some issues with transfer speeds
18:43 rotbeard joined #gluster
18:43 mkent I can rsync a file off the gluster mount from master -> slave at 100MB/sec, but the rsync jobs that geo-replication spawned are trudging along at like 4MB/sec
18:43 mkent looks like my glusterfsd threads are using 100% of a cpu core
18:45 mkent i'm not clear what role glusterfsd is playing in limiting the throughput here
18:48 mojibakeumd Can someone explain why the example documentation sets up the mounts for the bricks in /export, while /mnt is used for the clients?
18:51 _dist mojibakeumd: it's a matter of preference really where your bricks are
18:51 _dist but brick locations are on the servers; it's the FS that gluster will store its data and xattr data on
18:52 _dist joined #gluster
18:52 _dist clients typically place their mount.glusterfs in /mnt/thisguy
18:53 _dist you never want to write directly to a brick, those files won't be picked up by the shd
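Putting _dist's points together, a minimal layout sketch with hypothetical hosts and paths, bricks on the servers under /export and clients only ever touching the GlusterFS mount under /mnt:
    # on the servers: bricks are plain backend directories
    gluster volume create shared replica 2 server1:/export/gfs/brick1 server2:/export/gfs/brick1
    gluster volume start shared
    # on a client: all reads and writes go through the mounted volume, never the brick path
    mount -t glusterfs server1:/shared /mnt/shared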
18:57 vshankar joined #gluster
18:59 mojibakeumd thank you _dist.
18:59 mojibakeumd I found out about files not being picked up when writing directly into the brick. Still a learning experience.
19:06 julim_ joined #gluster
19:16 chirino joined #gluster
19:24 kryl ls: cannot access www: Input/output error < I have an error and I tried to work around it but I'm lost, can you help me deal with it please?
19:25 chirino joined #gluster
19:25 georgeh|workstat joined #gluster
19:25 _dist kryl: can you tell me about your setup?
19:27 kryl I just created simple volumes from one master node without second master node
19:28 kryl here is the result on a slave node after mounting the rep
19:28 bala1 joined #gluster
19:28 kryl some directories are working and one is failed with this error
19:28 kryl (excuse my english)
19:29 _dist a little more detail please. what are the brick FSs for each brick, how many bricks, what type of volume, what version of glusterfs. Are you using fuse mount or NFS, and is your ls error in the mounted location or a brick location?
19:31 kryl glusterfs 3.2.7 (wheezy)
19:31 andreask joined #gluster
19:31 kryl I mount a block on /data with xfs
19:32 kryl and I share some directories in /data with glusterfs like /data/extra
19:32 kkeithley_ @ppa
19:32 _dist ok, so your brick location is /data/extra
19:32 glusterbot kkeithley_: The official glusterfs packages for Ubuntu are available here: 3.4 stable: http://goo.gl/u33hy -- 3.5 stable: http://goo.gl/cVPqEH -- introducing QEMU with GlusterFS 3.4 support: http://goo.gl/7I8WN4
19:32 kryl yes
19:32 kryl _dist, on the server !
19:32 _dist and an ls in your brick directory works, but if you mount with mount.glusterfs and traverse the mounted directory it doesn't work?
19:33 kryl yes
19:33 _dist well 3.2.7 is old, but it should still work
19:34 kryl for example I mount this brick on another server with success
19:34 kkeithley_ 3.2.7 is very old. You should get a newer version from download.gluster.org, e.g. http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.4/Debian/apt/dists/wheezy/
19:34 glusterbot Title: Index of /pub/gluster/glusterfs/3.4/3.4.4/Debian/apt/dists/wheezy (at download.gluster.org)
19:34 kryl I can read all files and directories except www
19:34 _dist can you dpaste a "gluster volume status" and a "mount"?
19:34 kryl very old ?
19:34 _dist kryl: could be permissions
19:35 kryl /dev/xvdg1 on /data type xfs (rw,noatime,attr2,delaylog,nobarrier,logbufs=8,logbsize=256k,noquota)
19:35 kryl it's a new mount I was working with ext4 just before with the same error
19:35 _dist dpaste preferred https://dpaste.de/ <-- a gluster volume info or status will be large
19:35 glusterbot _dist: <'s karma is now -1
19:35 glusterbot Title: dpaste.de: New snippet (at dpaste.de)
19:36 kkeithley_ or http://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.1/Debian/apt/dists/
19:36 glusterbot Title: Index of /pub/gluster/glusterfs/3.5/3.5.1/Debian/apt/dists (at download.gluster.org)
19:36 kkeithley_ yes, very old
19:36 kryl Brick1: storage:/data/extra / Type: Distribute / Status: Started / Number of Bricks: 1 / Transport-type: tcp
19:38 kryl kkeithley, http://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/README < are you sure it's necessary ? it's the stable version of debian ;)
19:38 kryl _dist, one sec
19:39 bene2 joined #gluster
19:40 kryl _dist, https://dpaste.de/ozRc
19:40 glusterbot Title: dpaste.de: Snippet #276260 (at dpaste.de)
19:40 _dist it's only 1 directory you get RW errors on ?
19:42 _dist is that directory accessible in the brick (not the mount?) and are you sure it isn't something strange like a symbolic link on a different FS type?
19:42 kryl on the server
19:42 kryl where I can use the directory without any problem
19:44 kryl _dist, https://dpaste.de/6mTC
19:44 k3rmat left #gluster
19:44 glusterbot Title: dpaste.de: Snippet #276261 (at dpaste.de)
19:44 uebera|| joined #gluster
19:44 uebera|| joined #gluster
19:47 _dist I can't see how, but my gut feeling is it's a permission issue. That mount syntax is different from what I'm accustomed to. I'm just making guesses here
19:48 _dist kryl: also, you pretty much _need_ to create all directories through the gluster client, you can't start a new volume where data exists (it'll never be included in the volume)
19:48 _dist (again I'm just guessing you might have done that)
19:49 kryl yes I did
19:49 kryl I have a directory with data and I create the volume on it
19:49 _dist yeah you can't do that unfortunately
19:49 kryl I add/change/move data in both the client & the server !
19:50 kryl oups
19:50 kryl but it seems to work :)
19:50 kryl except this directory !
19:50 _dist you'd have to create a new blank directory as the brick, then copy your data through the mount (not directly into the brick)
19:50 kryl right now it works
19:50 kryl I copied the www on /tmp
19:50 kryl I create a new www directory (on the server)
19:50 kryl and copy back /tmp/www/* to www
19:50 _dist it might _appear_ to work, but when you add a new replica it won't copy the data cause it'll be missing all the xattr stuff
19:50 kryl and it works now ...
19:51 kryl I can read all the data on a slave node
19:51 kryl looks curious
19:51 kryl ok ...
19:52 kryl I missed that in the doc :-(
19:52 _dist no worries, honestly I think the volume create should add xattr data to existing files, and the volume delete should remove it. But neither actually happen :)
19:52 kryl so I have to create the empty directories on the master, start a volume on it
19:52 kryl and mount them before copying?
19:52 _dist yeap, that's the safest way
19:53 _dist there are tricks to add xattr data manually, I've not done it myself
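The safe sequence _dist outlines, as a hedged sketch reusing the server name from kryl's volume info but with a hypothetical volume name and paths:
    mkdir /data/brick-www                         # brand-new, empty brick directory on the server
    gluster volume create www-vol storage:/data/brick-www
    gluster volume start www-vol
    mount -t glusterfs storage:/www-vol /mnt/www-vol
    cp -a /tmp/www/. /mnt/www-vol/                # copy through the client mount, not into the brick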
19:53 kryl ok
19:53 kryl what about changing to a fresh version ?
19:53 _dist you mean a newer version of gluster? yeah I recommend that
19:53 kryl I'm not sure it's a good idea to use extra repo if it's not really necessary
19:54 kryl I'll have to do that on clients & server of course...
19:54 _dist yeap, but I'd recommend it
19:54 _dist if you can trust the gluster code, you can probably trust their debian repo
19:55 _dist but you could always compile it yourself if you want to be super careful
19:55 _dist personally I find that painful, even with checkinstall
19:56 kryl it's not a security question it's just about management
19:56 kryl If I use a stable version and begin to add a lot of extra repos for each service, maybe it's better to consider using another linux system? :)
19:57 tyl0r joined #gluster
20:13 kryl No volumes present < after upgrade :)
20:13 kryl woops lol
20:16 _disty4343 joined #gluster
20:16 _disty4343 it's a big upgrade, you might need to recreate your volume
20:17 _dist you'll also notice a .glusterfs directory in your bricks now
20:19 Loku gluster3.5 + autofs -- -o backupvolfile-servers not working
20:19 Loku anyone have tried ?
20:19 bene2 joined #gluster
20:20 Loku sorry - its option backup-volfile-servers=
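For reference, an autofs direct-map entry along the lines Loku is trying has roughly this shape (hypothetical hosts and volume name; whether the backup-volfile-servers option is honoured through autofs on 3.5 is exactly what is being reported as broken here, so this is only the shape of the configuration, not a confirmed fix):
    # /etc/auto.master
    /-    /etc/auto.gluster

    # /etc/auto.gluster
    /mnt/gfsvol  -fstype=glusterfs,backup-volfile-servers=server2:server3  server1:/gfsvol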
20:21 kryl _dist, ok
20:24 StarBeast joined #gluster
20:26 kryl is it possible to add a server node afterwards?
20:26 kryl or do I need to plan for it now?
20:28 _dist you can do it after
20:28 _dist but depending on the layout you want, you might want to do it at the start (distribute really) since rebalance is expensive
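Adding a server later, as _dist says, is a normal operation; a hedged sketch with hypothetical names, showing why distribute layouts are cheaper to plan up front (the rebalance step is the expensive part):
    gluster peer probe server3                    # add the new node to the trusted pool
    gluster volume add-brick myvol server3:/data/brick1
    gluster volume rebalance myvol start          # move existing files onto the new brick
    gluster volume rebalance myvol status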
20:28 bennyturns Loku, the name changed
20:36 m0zes joined #gluster
20:37 sputnik13 joined #gluster
20:49 julim joined #gluster
20:50 sputnik13 joined #gluster
20:52 sputnik13 joined #gluster
21:26 dtrainor joined #gluster
21:34 kkeithley1 joined #gluster
21:43 siel joined #gluster
21:44 hagarth joined #gluster
21:46 kryl joined #gluster
21:55 tyl0r joined #gluster
21:55 tyl0r left #gluster
22:06 siel joined #gluster
22:08 gehaxelt Anybody around?
22:09 gehaxelt I added a new brick (replica) to a volume, but the data does not sync / get copied to the new brick
22:12 gehaxelt okay, had to mount it and a "du -sh . " transfers the data now.
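Rather than relying on a stat sweep like "du -sh ." to trigger healing, the usual way to populate a newly added replica brick is a full self-heal; a sketch assuming GlusterFS 3.3 or newer and a volume named myvol:
    # crawl the volume and copy existing files onto the new replica brick
    gluster volume heal myvol full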
22:17 Peter4 help!!! gluster command hangs… while trying to stop a volume
22:17 Peter4 no gluster volume command is able to run at the moment...
22:17 Peter4 is there a way to remove the lock?
22:24 mkent left #gluster
22:29 gehaxelt Peter4, restart glusterfs?
22:30 Peter4 ya i just did
22:30 Peter4 everything came back…
22:30 Peter4 but why would stopping a volume hang the whole thing?
22:30 Peter4 nothing was mentioned in the log
22:30 Peter4 and I wonder how we can tell what hangs gluster
22:39 natgeorg joined #gluster
22:40 cmtime Peter4, I have seen it happen: when one of my 12 nodes is hung, the rest hang
22:40 cmtime so I had to do a restart on all of them one at a time to clear it.
22:40 Peter4 ic
22:40 Peter4 good to know
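The workaround cmtime describes amounts to restarting the management daemon on each node in turn; a hedged sketch for the sysvinit-style systems common with 3.5 (restarting glusterd normally leaves already-running brick processes and client I/O alone, though per bug 1122509 above it may restart the NFS and self-heal daemons):
    # run on each node, one at a time
    service glusterd restart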
23:01 AaronGr joined #gluster
23:03 atrius joined #gluster
23:03 JoeJulian Management operations should not hang bricks/volumes! That's a bug! Find it and file it!
23:04 Peter4 file a bug
23:04 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
23:15 atrius joined #gluster
23:36 theron joined #gluster
23:39 zerick joined #gluster
23:45 cyberbootje joined #gluster
23:46 glusterbot New news from newglusterbugs: [Bug 1122732] remove volume hang glustefs <https://bugzilla.redhat.com/show_bug.cgi?id=1122732>
23:46 theron joined #gluster
23:47 theron joined #gluster
23:57 bala joined #gluster
23:59 m0zes joined #gluster
