IRC log for #gluster, 2015-02-03


All times shown according to UTC.

Time Nick Message
00:10 plarsen joined #gluster
00:11 plarsen joined #gluster
00:25 RicardoSSP joined #gluster
00:34 T3 joined #gluster
00:37 gildub joined #gluster
01:14 side_control joined #gluster
01:22 Guest83936 joined #gluster
01:32 6A4ABCM3E joined #gluster
01:35 glusterbot News from newglusterbugs: [Bug 1185950] adding replication to a distributed volume makes the volume unavailable <https://bugzilla.redhat.com/show_bug.cgi?id=1185950>
01:46 plarsen joined #gluster
02:08 T3 joined #gluster
02:09 cyberbootje joined #gluster
02:10 nangthang joined #gluster
02:12 jmarley joined #gluster
02:25 hflai joined #gluster
02:25 jaank joined #gluster
02:25 T3 joined #gluster
02:41 _Bryan_ joined #gluster
02:55 fyxim joined #gluster
03:01 T3 joined #gluster
03:06 dgandhi joined #gluster
03:23 suman_d joined #gluster
03:42 anrao joined #gluster
03:43 itisravi joined #gluster
03:45 kshlm joined #gluster
03:45 bala joined #gluster
03:59 hflai joined #gluster
04:06 suman_d joined #gluster
04:06 RameshN joined #gluster
04:07 spandit joined #gluster
04:09 atinmu joined #gluster
04:10 sakshi joined #gluster
04:11 shubhendu joined #gluster
04:13 jobewan joined #gluster
04:14 shylesh__ joined #gluster
04:15 kshlm joined #gluster
04:20 bala joined #gluster
04:24 gildub joined #gluster
04:36 jiffin joined #gluster
04:39 rafi joined #gluster
04:43 suman_d joined #gluster
04:48 anoopcs joined #gluster
04:50 rjoseph joined #gluster
04:54 ndarshan joined #gluster
04:54 aulait joined #gluster
04:57 samsaffron___ joined #gluster
04:58 suman_d_ joined #gluster
05:01 mikedep333 joined #gluster
05:03 meghanam joined #gluster
05:06 sputnik13 joined #gluster
05:08 ppai joined #gluster
05:12 dusmant joined #gluster
05:22 anrao joined #gluster
05:26 maveric_amitc_ joined #gluster
05:26 gem joined #gluster
05:35 prasanth_ joined #gluster
05:37 Manikandan joined #gluster
05:38 FrankLu joined #gluster
05:43 sputnik13 joined #gluster
05:43 quantum Hi! I use GlusterFS as a Cinder backend. I have 2 compute nodes. How can I migrate an instance which boots from a volume if the first compute node fails?
05:44 quantum if I update 'node1' to 'node2' in the MySQL nova tables, will this work?
05:44 quantum sorry, wrong IRC channel :)
05:46 anrao joined #gluster
05:47 overclk joined #gluster
05:48 kdhananjay joined #gluster
05:49 nangthang joined #gluster
05:50 ramteid joined #gluster
05:51 stickyboy joined #gluster
05:53 atalur joined #gluster
05:57 vikumar joined #gluster
05:57 jiffin joined #gluster
05:59 soumya__ joined #gluster
06:03 T3 joined #gluster
06:09 ricky-ti1 joined #gluster
06:13 nbalacha joined #gluster
06:14 raghu joined #gluster
06:27 anil joined #gluster
06:29 nshaikh joined #gluster
06:32 suman_d joined #gluster
06:36 glusterbot News from newglusterbugs: [Bug 1181669] File replicas differ in content even as heal info lists 0 entries in replica 2 setup <https://bugzilla.redhat.com/show_bug.cgi?id=1181669>
06:36 glusterbot News from resolvedglusterbugs: [Bug 1152956] duplicate entries of files listed in the mount point after renames <https://bugzilla.redhat.com/show_bug.cgi?id=1152956>
06:36 glusterbot News from resolvedglusterbugs: [Bug 1101143] link file under .glusterfs directory not found for a directory <https://bugzilla.redhat.com/show_bug.cgi?id=1101143>
06:43 karnan joined #gluster
06:44 elico joined #gluster
06:46 atinmu joined #gluster
06:59 soumya__ joined #gluster
07:04 sputnik13 joined #gluster
07:10 stickyboy joined #gluster
07:15 jtux joined #gluster
07:23 ivok joined #gluster
07:25 atinmu joined #gluster
07:27 vikumar joined #gluster
07:28 sputnik13 joined #gluster
07:30 kovshenin joined #gluster
07:38 elico joined #gluster
07:39 devilspgd joined #gluster
07:40 sputnik13 joined #gluster
07:42 Philambdo joined #gluster
07:48 sputnik13 joined #gluster
07:51 nbalacha joined #gluster
07:53 sputnik13 joined #gluster
08:03 harish joined #gluster
08:04 kkeithley1 joined #gluster
08:05 tanuck joined #gluster
08:06 smohan joined #gluster
08:07 Guest83936 joined #gluster
08:09 lalatenduM joined #gluster
08:18 tdasilva joined #gluster
08:20 Pupeno joined #gluster
08:24 ivok joined #gluster
08:26 sputnik13 joined #gluster
08:27 hagarth joined #gluster
08:30 mbukatov joined #gluster
08:30 ghostpl joined #gluster
08:31 atalur_ joined #gluster
08:32 atalur joined #gluster
08:33 ricky-ticky1 joined #gluster
08:42 bjornar joined #gluster
08:43 rwheeler joined #gluster
08:45 shylesh__ joined #gluster
08:53 liquidat joined #gluster
08:59 kdhananjay joined #gluster
09:05 _tziOm joined #gluster
09:07 glusterbot News from resolvedglusterbugs: [Bug 1188547] invalid argument iobuf <https://bugzilla.redhat.com/show_bug.cgi?id=1188547>
09:11 Norky joined #gluster
09:16 shubhendu joined #gluster
09:18 ndarshan joined #gluster
09:21 sputnik13 joined #gluster
09:24 sputnik1_ joined #gluster
09:24 tanuck joined #gluster
09:24 tanuck_ joined #gluster
09:27 doekia joined #gluster
09:33 awerner joined #gluster
09:34 atalur joined #gluster
09:43 DV joined #gluster
09:47 deniszh joined #gluster
09:47 T0aD joined #gluster
09:48 atinmu joined #gluster
09:52 ivok joined #gluster
09:57 kdhananjay joined #gluster
09:57 mbukatov joined #gluster
09:57 sputnik13 joined #gluster
09:59 ralala joined #gluster
10:03 sputnik13 joined #gluster
10:11 t0ma joined #gluster
10:12 t0ma yo! i removed my old volumes but now the gluster daemons are saying that 49154 is the port to use
10:12 ndarshan joined #gluster
10:12 t0ma how do I reset the volumes to use 49152-53
10:13 ricky-ticky joined #gluster
10:19 masterfix joined #gluster
10:21 t0ma no matter, i found it
10:23 Manikandan joined #gluster
10:32 mbukatov joined #gluster
10:34 shubhendu_ joined #gluster
10:37 glusterbot News from resolvedglusterbugs: [Bug 865734] Add configure support for adding userspace probes in systemtap <https://bugzilla.redhat.com/show_bug.cgi?id=865734>
10:38 sputnik13 joined #gluster
10:41 rafi joined #gluster
10:44 rafi1 joined #gluster
10:47 soumya joined #gluster
10:54 ivok joined #gluster
10:58 meghanam joined #gluster
10:59 atinmu joined #gluster
11:16 soumya joined #gluster
11:20 vikumar joined #gluster
11:22 vikumar joined #gluster
11:36 overclk joined #gluster
11:44 diegows joined #gluster
12:04 overclk joined #gluster
12:05 itisravi joined #gluster
12:05 telmich joined #gluster
12:11 jdarcy joined #gluster
12:12 RameshN joined #gluster
12:24 elico joined #gluster
12:27 awerner joined #gluster
12:33 kovshenin joined #gluster
12:34 ira joined #gluster
12:34 ira -17C is not good for me.... that's today's thought.
12:35 ira Missend :)
12:35 kovshenin joined #gluster
12:39 kovshenin joined #gluster
12:41 [Enrico] joined #gluster
12:49 jvandewege_ joined #gluster
12:52 chirino_m joined #gluster
12:52 Ramereth1home joined #gluster
12:58 kbyrne joined #gluster
12:58 atalur joined #gluster
13:07 jmarley joined #gluster
13:08 diegows joined #gluster
13:13 xavih joined #gluster
13:13 malevolent joined #gluster
13:18 rafi joined #gluster
13:18 anrao joined #gluster
13:36 Philambdo joined #gluster
13:40 smohan joined #gluster
13:42 hagarth joined #gluster
13:43 TvL2386 joined #gluster
13:44 kkeithley1 joined #gluster
13:47 B21956 joined #gluster
13:47 B21956 joined #gluster
13:49 rwheeler joined #gluster
13:49 prasanth_ joined #gluster
13:53 DV joined #gluster
13:56 Gill joined #gluster
14:07 nbalacha joined #gluster
14:12 kkeithley1 joined #gluster
14:13 virusuy joined #gluster
14:13 virusuy joined #gluster
14:14 awerner joined #gluster
14:20 wkf joined #gluster
14:25 xavih joined #gluster
14:30 tdasilva joined #gluster
14:38 glusterbot News from resolvedglusterbugs: [Bug 1067059] Support for unit tests in GlusterFS <https://bugzilla.redhat.com/show_bug.cgi?id=1067059>
14:43 kshlm joined #gluster
14:43 kshlm joined #gluster
14:44 LebedevRI joined #gluster
14:51 kovshenin joined #gluster
14:54 dgandhi joined #gluster
14:59 ivok joined #gluster
15:01 plarsen joined #gluster
15:05 kshlm joined #gluster
15:13 bennyturns joined #gluster
15:14 haomaiwa_ joined #gluster
15:20 ricky-ti1 joined #gluster
15:21 wushudoin joined #gluster
15:24 suman_d joined #gluster
15:24 calisto joined #gluster
15:26 haomaiwa_ joined #gluster
15:33 nangthang joined #gluster
15:41 kovsheni_ joined #gluster
15:45 rcampbel3 joined #gluster
15:47 drankis joined #gluster
15:50 ec2-user_ joined #gluster
15:51 nangthang joined #gluster
15:53 diegows_ joined #gluster
15:54 jobewan joined #gluster
15:58 ec2-user_ joined #gluster
15:58 Philambdo joined #gluster
15:59 hagarth joined #gluster
16:15 diegows joined #gluster
16:23 lpabon joined #gluster
16:23 anrao joined #gluster
16:25 jbrooks joined #gluster
16:29 shubhendu_ joined #gluster
16:33 drankis joined #gluster
16:34 kshlm joined #gluster
16:37 ricky-ticky2 joined #gluster
16:41 drankis joined #gluster
16:45 ralalala joined #gluster
16:46 Philambdo joined #gluster
16:46 _dist joined #gluster
16:48 rwheeler joined #gluster
16:50 T3 joined #gluster
16:54 fsimonce joined #gluster
17:00 drankis joined #gluster
17:02 suman_d joined #gluster
17:05 tanuck joined #gluster
17:10 T3 joined #gluster
17:10 T3 joined #gluster
17:14 nishanth joined #gluster
17:14 diegows joined #gluster
17:30 bala joined #gluster
17:33 gem joined #gluster
17:34 jmarley joined #gluster
17:36 suman_d joined #gluster
17:44 jbrooks joined #gluster
17:53 Philambdo joined #gluster
17:57 Rapture joined #gluster
18:02 sputnik13 joined #gluster
18:08 masterfix left #gluster
18:11 victori joined #gluster
18:13 neofob joined #gluster
18:17 jobewan joined #gluster
18:20 Ramereth joined #gluster
18:30 _dist joined #gluster
18:38 rcampbel3 joined #gluster
18:47 kanagaraj joined #gluster
19:20 lmickh joined #gluster
19:22 Cenbe Simple test: two servers, one brick on each, replicated. To create the trusted pool, do I probe each server from the other, or only the second one from the first?
19:23 Nicolas_22 joined #gluster
19:24 semiosis Cenbe: ,,(hostnames)
19:25 glusterbot Cenbe: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
19:25 Cenbe thanx
19:25 semiosis yw
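A minimal sketch of the probe sequence the factoid above describes, for Cenbe's two-server case; server1 and server2 are hypothetical hostnames standing in for the real peers:

    # On server1: probe the other peer by hostname
    gluster peer probe server2
    # On server2: probe the first server back by hostname so it too is recorded by name
    gluster peer probe server1
    # Verify from either peer that the pool has formed
    gluster peer status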
19:26 jobewan joined #gluster
19:27 DV joined #gluster
19:40 Nicolas_22 left #gluster
19:44 jbrooks joined #gluster
19:52 redbeard joined #gluster
19:55 ivok joined #gluster
19:59 krullie joined #gluster
20:01 rcampbel3 Hi everyone. I'm locking down my port access list for gluster in AWS using VPC and separate client and server security group lists. Can those with more knowledge than me review this and let me know if I'm missing anything? Also, there are some questions inline. Thanks in advance! http://pastebin.com/aHBxVtBg
20:01 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
20:04 JoeJulian @ports
20:04 glusterbot JoeJulian: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
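A hedged sketch of host firewall rules matching the port list above, written for iptables; the 49160 upper bound on the brick range is an assumption and depends on how many bricks the server hosts:

    # glusterd management (24008 is only needed with rdma)
    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT
    # brick daemons (glusterfsd): one port per brick, starting at 49152 on 3.4+
    iptables -A INPUT -p tcp --dport 49152:49160 -j ACCEPT
    # Gluster NFS and NLM, only needed if NFS clients are used
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT
    iptables -A INPUT -p udp --dport 111 -j ACCEPT
    iptables -A INPUT -p tcp --dport 2049 -j ACCEPT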
20:06 rcampbel3 @JoeJulian - that's the basis for above paste, and it generated a lot of other questions for my use case where I'm separating out client access from server access
20:06 neofob left #gluster
20:07 bene2 joined #gluster
20:08 T3 joined #gluster
20:08 badone joined #gluster
20:10 semiosis rcampbel3: you are certainly not using RDMA in EC2
20:10 semiosis all you get is ethernet
20:11 semiosis rcampbel3: you only need to allow NFS ports if you are using NFS clients.  if you're using FUSE clients you can safely ignore anything to do with NFS
20:11 rcampbel3 semiosis: Right, but there is RDMA over ethernet... I wasn't sure if that might be involved
20:11 JoeJulian rcampbel3: http://fpaste.org/181078/22994299/
20:11 semiosis you'd know if you were using RDMA over ethernet
20:11 rcampbel3 semiosis: so, using FUSE, I don't need NLM as well?
20:12 semiosis dont need NLM with FUSE clients
20:12 semiosis rcampbel3: in fact, in EC2 you don't even get ethernet, you get IP
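For the FUSE-only case semiosis describes, nothing NFS-related needs to be opened; a minimal sketch, with server1 and myvol as hypothetical names:

    # Native (FUSE) client: fetches the volfile over 24007/tcp, then talks directly to the brick ports
    mount -t glusterfs server1:/myvol /mnt/myvol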
20:21 joshin joined #gluster
20:21 joshin joined #gluster
20:25 deniszh joined #gluster
20:25 ivok joined #gluster
20:27 hagarth joined #gluster
20:27 Philambdo joined #gluster
20:28 T3 joined #gluster
20:59 matt_ joined #gluster
21:02 diegows joined #gluster
21:08 daMaestro joined #gluster
21:16 tanuck joined #gluster
21:21 Philambdo joined #gluster
21:28 Pupeno_ joined #gluster
21:30 cornfed78 joined #gluster
21:43 rcampbel3 joined #gluster
21:50 sputnik13 joined #gluster
21:54 blue_vd joined #gluster
21:55 blue_vd hello
21:55 glusterbot blue_vd: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
22:02 blue_vd so I have an issue: I am testing gluster; I installed gluster on 3 nodes and I am at the point where I create various types of volumes and run tests with qemu/kvm on them. My problem is that if I create multiple bricks per node (but on the same filesystem), at some point volume creation fails with "/mnt/g0/e is already part of a volume". However, /mnt/g0/e is DEFINITELY NOT part of any other volume. However, /mnt/g0/a, /mnt/g0/b and /mnt/g0/c are indeed part
22:02 blue_vd of other volumes. From what I read it is not required to have only one brick per filesystem, so my scenario should work. I am using glusterfs-3.6.2-1.el6.x86_64 on CentOS 6.
22:04 blue_vd So, the question is: am I missing something?
22:07 JoeJulian blue_vd: What's the command line you're attempting?
22:08 blue_vd gluster volume create volrep replica 2 test-gluster-01:/mnt/g0/e/ test-gluster-02:/mnt/g0/e/
22:08 blue_vd volume create: volrep: failed: /mnt/g0/e is already part of a volume
22:09 glusterbot News from resolvedglusterbugs: [Bug 838784] DHT: readdirp goes into a infinite loop with ext4 <https://bugzilla.redhat.com/show_bug.cgi?id=838784>
22:09 sputnik13 joined #gluster
22:12 partner blue_vd: stepping into Joe's boots: has either of the bricks ever been part of any volume? gluster leaves its marks on them as a safety precaution, which you will need to clear out before you can reuse an old brick
22:13 JoeJulian ~pasteinfo | blue_vd
22:13 glusterbot blue_vd: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
22:14 partner while you say it has definitely not been part of a previous volume, just double-check this isn't the case; perhaps, just in case, clear the attributes of the bricks as well
22:15 partner I must agree the error sucks, it does not state exactly which brick is involved.. I had four, or eight of them, similarly named...
22:15 partner i actually have 16 of them myself, fsck if that would be the error..
22:16 blue_vd https://dpaste.de/a5Oj
22:19 blue_vd also check this other one
22:20 blue_vd https://dpaste.de/AkNr : I am creating a "new" directory and submitting the create command; it initially does NOT complain about the newly created directory but complains about the brick on the second node. After a few minutes I retry the very same command and it again complains about the first node with the "new" directory
22:21 blue_vd it's almost like any new directory gets tainted after some moments and cannot be used any more
22:22 blue_vd should I create two directories "newer" on both nodes, I think the volume creation would succeed if it were executed very shortly after creating those directories
22:22 blue_vd filesystems involved are xfs if it matters
22:22 blue_vd also set selinux to permissive
22:25 partner try it out? at least then we would know? there is nothing taking over those directories so time is not the issue
22:25 blue_vd sure, just a moment
22:26 partner the point is, if any directory has ever been part of any volume, i.e. as a brick, gluster has left its marks there and it cannot be used as is (the attributes need to be cleared)
22:26 JoeJulian @path or prefix
22:26 glusterbot JoeJulian: http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
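A minimal sketch of the inspection and cleanup that post describes, using the /mnt/g0/e path from this discussion; run it only on a brick directory you are certain is safe to reuse:

    # Show gluster's extended attributes on the brick directory (run on the brick host)
    getfattr -m . -d -e hex /mnt/g0/e
    # Remove the volume markers so the path can be used as a brick again
    setfattr -x trusted.glusterfs.volume-id /mnt/g0/e
    setfattr -x trusted.gfid /mnt/g0/e
    rm -rf /mnt/g0/e/.glusterfs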
22:29 blue_vd yes, but in this case none of the directories were part of a previous volume
22:29 blue_vd https://dpaste.de/5fkq
22:29 partner then it sounds like a bug of some sort
22:29 blue_vd as suspected, if I submit volume create soon enough after directory creation everything works fine
22:31 partner can't see anything wrong in the above paste
22:31 blue_vd the above shows that if I run create soon enough after directory creation it works
22:32 blue_vd while the previous paste shows two identical invocations, one that does not complain about directory "new" and one that does (a few minutes later)
22:32 partner ok, do it again with long enough sleep in between?
22:33 partner there is nothing that comes to my mind that would explain the behaviour you're describing, gluster just isn't crawling any dirs on its own
22:33 blue_vd this is what I'm trying now, but I'm not sure I'll be able to hit just the right amount of time on the first try
22:34 blue_vd mkdir evennewer; ssh root@test-gluster-02 "mkdir /mnt/g0/evennewer"; ls -l ; ssh root@test-gluster-02 "ls -l /mnt/g0"; date
22:34 blue_vd now lets wait a few minutes
22:35 partner btw I'm glad you are putting the effort into the issue and not just leaving in frustration, this is the way to solve the issue, whatever it turns out to be
22:36 eclectic_ Hello, I have the worst question of the year. In a 4-node (2-replica) setup with both client and server running on each node, does it matter whether the client connects locally or to a remote node?
22:37 blue_vd well, it's my wish to have it work, if i make it work as expected i plan to use it more extensively, but first i have to see if it works and performs as i hope it would do (i would use it for virtual machines storage)
22:38 partner blue_vd: for what its worth i'm aiming to reach the petabyte class with my gluster storage and i'm not far away, have lots of volumes of all sorts (dist, dist+repl, repl-2, repl-3)
22:39 blue_vd hm, this time it worked. OK, next scenario: would it be possible that if I first give it a wrong parameter, or if only one of the bricks was in a volume at some point, the system "touches" all the tentative bricks, so a further try would fail even on bricks that were never actually included in a volume but were only attempted to be included in one? Some failed rollback, for example?
22:40 partner blue_vd: as i expected..
22:40 blue_vd lets try this one, too
22:40 partner 00:26 < partner> the point is, if any directory has been ever part of any volume ie. as a brick, the gluster has left its marks there and it cannot be used as is (the attributes needs to be cleared)
22:40 partner i mean, *ever*
22:41 partner there are attributes set that are not visible unless you explicitly look for them
22:41 JoeJulian ~mount server | eclectic_
22:41 glusterbot eclectic_: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns, or (#2) One caveat is that the clients never learn of any other management peers. If the client cannot communicate with the mount server, that client will not learn of any volume changes.
22:42 blue_vd partner, yes, but it was not, it was just an attempt that failed. The volume never came into existence because its creation failed initially.
22:42 partner blue_vd: Joe gave you a link that perfectly describes how the attributes are used and how they are cleared from a given directory/mount/$path
22:43 partner blue_vd: i cannot say if the gluster made any of its tricks on the bricks involved at that time, please see the post above, check the attributes
22:43 partner they really mean a lot to gluster so check them out
22:43 JoeJulian ... btw... I completely agree with your frustration that a failed attempt at creating a volume *still* leaves behind these attributes.
22:44 jobewan joined #gluster
22:44 blue_vd on the other hand i can see that i do not need to create the directories by myself, if i name a brick to a path that does not exist gluster creates it itself
22:45 partner JoeJulian: so that's a known issue?
22:46 blue_vd so this may be a better way of doing it, at least gluster creates them right when  they are needed
22:46 JoeJulian Pretty sure I filed a bug report on that ages ago.
22:46 blue_vd :D
22:46 partner JoeJulian: that's what you said about logrotation, too ;) (or a fix of it.. :)
22:46 JoeJulian But... you know... new features are more fun.
22:46 partner but, nevertheless, it sucks and confuses the users big time
22:47 partner hence i instructed towards the attributes
22:48 partner blue_vd: it will create them instantly. just, as seen above, be sure the bricks have not been part of any volume, whether (I guess) from a successful or an unsuccessful operation
22:48 blue_vd well, I'll have to get creative with brick names to be sure I will not land on a name that was tried before
22:49 partner blue_vd: perhaps dig deeper while at it, check the attributes and clear them, Joe gave you the link
22:49 partner it's all that matters to gluster
22:50 blue_vd I actually tried the attribute clearing thingy, and it seemed to work. The mystery was why the heck those attributes were appearing again, since I never succeeded in including that directory in a volume
22:50 partner IMO this has been a great exercise into the internals of gluster
22:51 JoeJulian <sigh>
22:51 blue_vd and it seems that those were set by further failed attempts to include that certain brick in a volume
22:51 partner blue_vd: possibly due to what Joe said, a failed attempt just *might* leave some attributes behind, i don't know for sure
22:51 JoeJulian bug 835494
22:51 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=835494 medium, medium, ---, bugs, CLOSED DEFERRED, Volume creation fails and gives error "<brickname> or a prefix of it is already part of a volume", eventhough that brick is not part of any volume.
22:51 JoeJulian Guess the bug wasn't specific to the desired fix.
22:52 partner ..
22:52 blue_vd that bug description sounds familiar
22:54 blue_vd this deserves a section in the FAQ
22:54 partner blue_vd: IMO we can conclude your case came down to the fact that some of the bricks were part of a failed volume creation attempt that left attributes on the bricks you then tried to use again?
22:54 blue_vd yup
22:55 partner I bet Joe is busy in the background filing a new bug report
22:56 partner blue_vd: but as for you, i guess the resolution is satisfactory and you now know how to cope with it?
22:57 blue_vd bricks are left "dirty" by a previous failed volume creation attempt. It can be quite frustrating, I think, because even other mistakes in the creation command (a syntax error or a wrong hostname) would most likely leave otherwise good bricks dirty and unusable; one would probably need to actually clean up the bricks one wants to use if one does not get the command right on the first attempt
22:58 partner that would need to be tested. what was your version of glusterfs again?
22:58 blue_vd partner, it is very satisfactory for me to know what happened and why, now i can work around it. But it would be very nice to have it fixed because it has the potential to frustrate in situations like the above
22:58 blue_vd moment
22:58 blue_vd glusterfs-3.6.2-1.el6.x86_64
22:58 partner i understand and that should not happen in the first place so its a bug
22:58 partner thanks
22:59 plarsen joined #gluster
23:00 blue_vd partner, further info: it seems that not every failed attempt leaves dirty bits behind
23:00 blue_vd i'll try to dig deeper into this
23:00 JoeJulian I think that's just a chicken and egg thing. It depends on what actually completed before one of the glusterd reported the failure.
23:03 partner sounds like a viable option
23:03 T3 joined #gluster
23:04 blue_vd yup. hard to follow this one, but maybe some better way to treat this could be created. Something like: "detected brick already part of volume NAME; volume NAME is not currently known to gluster, would you like to clear old metadata? (y/n)"
23:04 partner at least the hostname should be given, that is not included on the current error message
23:04 T3 joined #gluster
23:04 blue_vd or "volume NAME IS currently known to gluster, we cannot even offer to cleanup metadata"
23:04 siel joined #gluster
23:05 partner 00:08 < blue_vd> volume create: volrep: failed: /mnt/g0/e is already part of a volume
23:05 blue_vd yes, but what volume ? :)
23:05 partner same volume of course but which host
23:05 partner oh
23:05 blue_vd actually it does say the host somewhere
23:06 partner that is what you pasted
23:06 blue_vd but the volume name would be cool to be shown
23:06 JoeJulian Could actually be another volume, too.
23:06 partner it might not know the volume name anymore, but I agree the more it can say, the more easily an admin can make choices
23:06 JoeJulian But it doesn't necessarily know the volume name, just the id.
23:06 blue_vd because that volume name could be cross-checked against the currently known volumes, offering the user the choice of automatically erasing metadata if the volume is no longer registered by gluster
23:07 blue_vd yes, or some id
23:07 JoeJulian Unless it was formerly part of a volume that no longer exists.
23:07 partner I fully agree it could be more verbose there
23:07 partner some lazy admin (like me) might reuse an old LV for another purpose
23:08 partner but that will be the first error to encounter
23:08 blue_vd if it was part of a volume that no longer exists we can offer the user the choice to erase previous metadata, like we can mdadm --zero-superblock with block devices for raid arrays
23:08 partner but as said, its a safety measurement aswell
23:08 blue_vd yup
23:08 JoeJulian I've just been fighting a problem like that with salt. I do a cleanup, deleting an LV. Next highstate I create the LV again and it hangs because the filesystem is detected and lvcreate is stupidly waiting for input.
23:09 partner it's not that hard to clear the attributes, though that part could also be documented better
23:09 gildub joined #gluster
23:09 JoeJulian Should be documented in the error message at the very least.
23:10 glusterbot News from newglusterbugs: [Bug 1188886] volume extended attributes remain on failed volume creation <https://bugzilla.redhat.com/show_bug.cgi?id=1188886>
23:10 JoeJulian "Extended attribute trusted.glusterfs.volume-id exists on %s. Please ensure the brick is ready for use and remove the attribute before trying again."
23:11 blue_vd severity should not be high, i think :)
23:11 partner there you go, copypasted to bz already?-) i'm too tired, i must fail now
23:12 JoeJulian I disagree. It causes user confusion and hinders adoption. If triage disagrees, so be it.
23:12 Cenbe Simple test: two servers, one brick on each, replicated. I want to create a qemu storage pool from the volume.
23:12 Cenbe Thinking of high availability, what happens if one of the two servers goes down and it's the one that was specified as the hostname for the qemu pool?
23:12 blue_vd ohwell so be it then
23:12 JoeJulian @mount server
23:12 glusterbot JoeJulian: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns, or (#2) One caveat is that the clients never learn of any other management peers. If the client cannot communicate with the mount server, that client will not learn of any volume changes.
23:13 Cenbe OK, thanks. Saw that earlier, but wanted to be sure it applied to this case.
23:13 JoeJulian Only time it doesn't is if you mount with nfs.
23:13 blue_vd sleep time now, thank you for the help, guys. I'll linger the following days, as I may hit some other issues as I test things.
23:13 JoeJulian Or, of course, if you don't replicate.
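For Cenbe's qemu case the same rule applies: the named server is only used to fetch the volume definition, so any replica can serve that role. A hedged sketch, with server1, server2 and myvol as hypothetical names; qemu must be built with libgfapi support, and the FUSE fallback option name can vary between releases:

    # qemu/libgfapi: one reachable server is enough to fetch the layout; I/O then goes to all bricks
    qemu-img create -f qcow2 gluster://server1/myvol/vm1.qcow2 20G
    # FUSE mount with an explicit fallback volfile server
    mount -t glusterfs -o backup-volfile-servers=server2 server1:/myvol /var/lib/libvirt/images/myvol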
23:13 blue_vd thanks again, have a nice night/workday, as applies :)
23:13 JoeJulian Any time, blue_vd.
23:13 partner blue_vd: night, heading to bed aswell
23:15 partner ..trying, should be up in 6 or so hours to go and hear/plan what i'm going to do next with the company..
23:15 Cenbe FWIW, I agree that the issue you guys were discussing is a serious one. It tripped me up when I first looked at gluster about a year ago.
23:15 Cenbe (about the attributes)
23:16 JoeJulian Sleep well, partner. :D
23:16 partner JoeJulian: i probably can't, its the judgement day(s) for gluster on my behalf i'd think :)
23:17 JoeJulian I doubt you'll get away from it quite that easy. They'll move you and then start calling you all the time for help.
23:17 RicardoSSP joined #gluster
23:17 RicardoSSP joined #gluster
23:17 partner i'm burning the bridges best i can....
23:18 partner but, seriously i've expressed my 2+ year investment on this topic and desire to continue developing that front further, lets see
23:19 JoeJulian Let them move you, then when they realize how much they need you you'll be in just the right spot for more money.
23:20 partner nah, that won't work, you know why? the gluster is perfectly set up and will run on its own for many years without anybody touching it.. :)
23:20 JoeJulian ... and don't comment on that. This channel's logged. ;)
23:22 partner haha, i know. its not really about knowing the "gluster volume foo..", its the things around it
23:23 partner everybody can install say vmware esxi and get vm's running there, or kvm/qemu/.... its the thing after that
23:25 partner which reminds me, I made a small proto of "glusterfs on SD", as all the current hardware comes with an SD option to boot from, similarly to how all ESX and such run currently: no local hard disks
23:27 partner the only difference being that all the hard disks are data disks and the OS is on the SD card, but that is kind of stupid, there's always a couple of gigs of space for the OS..
23:27 partner or kind of not but given the amount of logs and crap, there's plenty of issues to sort out
23:28 JoeJulian cool
23:29 jaank joined #gluster
23:30 Pupeno joined #gluster
23:31 rcampbel3 partner: why not stateless PXE boot gluster server ;)
23:31 partner rcampbel3: because i've done that already, to some degree, its all just for fun so far anyways :)
23:32 rcampbel3 Aah... my friend calls those exercises, "practicing your kung-fu" ;)
23:32 partner i'm blessed with servers with dedicated mirror for OS so that is my boring life today
23:32 JoeJulian stateless pxe boot gluster would be way easier than the stateless* pxe boot ceph I've just done.
23:33 partner JoeJulian: maybe copy-paste your findings here, as I'm kind of strongly turning towards ceph here and repeating all the good and bad ideas on that front as well? :-D
23:33 partner only then can I "know"
23:33 JoeJulian Blog posts coming up.
23:34 partner grrrreat
23:34 JoeJulian Just have to run them by legal first. :(
23:34 partner well, send me an email in couple of years or so once its out?-)
23:34 JoeJulian Hehe
23:35 rcampbel3 Question: what's downside of enabling SSL mode? Aside from some minor CPU overhead, I presume... I'm in private subnet and my bricks are on encrypted EBS volumes, access controlled entirely by security groups... is adding SSL overkill?
23:36 partner seriously Joe, keep writing, awesome blogging; I've read through it so many times and, as you yourself often refer to it here, it's a really useful resource
23:36 JoeJulian That's it, rcampbel3, just the overhead.
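If rcampbel3 did want to turn it on anyway, a hedged sketch of the per-volume switches, assuming the option names used by the 3.5/3.6 SSL support and certificates already installed as /etc/ssl/glusterfs.pem, glusterfs.key and glusterfs.ca on every node:

    # Assumed option names; check the documentation for your release
    gluster volume set myvol client.ssl on
    gluster volume set myvol server.ssl on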
23:37 JoeJulian Thank you, partner.
23:37 partner JoeJulian: no, thank You.
23:38 partner I wish I got all my stuff out somehow, I just cannot find a moment to write about anything anywhere; it's just not built into me, even though I love to share things with others
23:39 RicardoSSP joined #gluster
23:39 RicardoSSP joined #gluster
23:39 partner i cannot even get to bed in time.. :)
23:40 PorcoAranha joined #gluster
23:41 JoeJulian I never get to bed on time either.
23:42 JoeJulian I just made myself a goal of writing every Tuesday. I failed, but even in failing I was able to get something done.
23:46 partner so it happens tuesday is my "free night", sort of, relieved from family obligations, used to have band rehearsals, now.. plenty of free time? not..
23:57 partner as i don't speak english that well, and we have stupid hostnames, what would be the Y for the Gluster Analysis Y...?
