IRC log for #gluster, 2014-07-22


All times shown according to UTC.

Time Nick Message
00:08 tom[] JoeJulian: ok. tnx
00:21 bennyturns joined #gluster
00:24 tdasilva joined #gluster
00:28 tristanz joined #gluster
00:30 joevartuli joined #gluster
00:31 fubada joined #gluster
00:35 joevartuli left #gluster
00:45 elico joined #gluster
00:45 troj joined #gluster
00:53 juhaj joined #gluster
00:55 troj joined #gluster
00:56 zerick joined #gluster
00:57 eshy joined #gluster
00:58 Alex Is striped-replicated 'supported' on Gluster 3.x now, or is it RH Technology Preview only?
01:05 jbrooks left #gluster
01:07 fubada joined #gluster
01:07 hagarth joined #gluster
01:10 chirino joined #gluster
01:13 bala joined #gluster
01:16 troj joined #gluster
01:32 troj joined #gluster
01:35 bala joined #gluster
01:45 troj joined #gluster
01:50 harish__ joined #gluster
01:50 haomaiwa_ joined #gluster
01:53 haomaiwang joined #gluster
02:05 joevartuli joined #gluster
02:08 haomai___ joined #gluster
02:12 haomaiwa_ joined #gluster
02:13 jobew_000 joined #gluster
02:13 haomaiwa_ joined #gluster
02:16 jobew_000 joined #gluster
02:17 jobewan joined #gluster
02:19 jobewan joined #gluster
02:27 plarsen joined #gluster
02:29 haomai___ joined #gluster
02:31 elico joined #gluster
02:32 glusterbot New news from newglusterbugs: [Bug 1121822] Cmockery2 is being linked against gluster applications <https://bugzilla.redhat.com/show_bug.cgi?id=1121822>
02:42 joevartuli joined #gluster
02:45 joevartuli joined #gluster
02:50 joevartuli left #gluster
02:54 siel joined #gluster
02:54 siel joined #gluster
03:01 elico joined #gluster
03:07 anoopcs joined #gluster
03:07 chirino joined #gluster
03:11 elico joined #gluster
03:12 troj joined #gluster
03:22 nbalachandran joined #gluster
03:26 troj joined #gluster
03:29 bharata-rao joined #gluster
03:33 troj joined #gluster
03:45 itisravi joined #gluster
03:45 shubhendu joined #gluster
03:51 atinmu joined #gluster
04:03 joevartuli joined #gluster
04:09 joevartuli joined #gluster
04:20 nishanth joined #gluster
04:25 meghanam joined #gluster
04:25 meghanam_ joined #gluster
04:30 Rafi_kc joined #gluster
04:31 haomaiwang joined #gluster
04:35 Peter3 Hi I am still stuck with a quota issue
04:35 Peter3 [2014-07-22 04:24:37.689348] E [cli-cmd-volume.c:1351:cli_cmd_quota_handle_list_all] 0-cli: Failed to get quota limits for abd01245-73e1-4ef6-aba6-dc087cf0bccd
04:36 Peter3 whenever i try to do a "gluster volume quota sas02 list"
04:36 Peter3 i got this from cli.log
04:36 Peter3 it's been like this for a couple of weeks
04:36 Peter3 how can i get rid of this error?
04:36 dusmant joined #gluster
04:37 Peter3 any idea JoeJulian and hagarth?
04:38 anoopcs joined #gluster
04:39 kdhananjay joined #gluster
04:39 Peter3 seems like the gfid abd01245-73e1-4ef6-aba6-dc087cf0bccd is long gone
04:39 kumar joined #gluster
04:39 Peter3 how could the quota still look for abd01245-73e1-4ef6-aba6-dc087cf0bccd ?
04:45 Peter3 where is the directory quota info stored?
04:45 Peter3 i wonder where the gfid quota limit is stored?
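
A minimal sketch of where to look for Peter3's question, assuming a volume named sas02 and a brick at /bricks/sas02 (both illustrative): the per-directory limit lives as an extended attribute on the directory on each brick, and the set of gfids with limits is recorded in glusterd's quota.conf for the volume, which may be where a stale gfid like the one in the error above lingers.

    # run as root on a brick server; paths are illustrative
    getfattr -e hex -n trusted.glusterfs.quota.limit-set /bricks/sas02/some/dir
    hexdump -C /var/lib/glusterd/vols/sas02/quota.conf | head   # header followed by raw 16-byte gfids
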
04:47 raghu joined #gluster
04:47 raghu joined #gluster
04:53 elico joined #gluster
05:02 kshlm joined #gluster
05:10 kanagaraj joined #gluster
05:12 jobewan joined #gluster
05:12 Humble joined #gluster
05:16 ppai joined #gluster
05:18 aravindavk joined #gluster
05:21 mjrosenb joined #gluster
05:23 itisravi_ joined #gluster
05:24 shubhendu joined #gluster
05:24 prasanth|offline joined #gluster
05:24 sac`away joined #gluster
05:25 glusterbot New news from resolvedglusterbugs: [Bug 1115748] Bricks are unsync after recevery even if heal says everything is fine <https://bugzilla.redhat.com/show_bug.cgi?id=1115748>
05:25 kumar joined #gluster
05:28 ndarshan joined #gluster
05:30 haomaiwa_ joined #gluster
05:37 sac`away joined #gluster
05:37 ndarshan joined #gluster
05:37 prasanth_ joined #gluster
05:37 kshlm joined #gluster
05:37 prasanth|offline joined #gluster
05:37 meghanam_ joined #gluster
05:38 kanagaraj joined #gluster
05:39 shubhendu joined #gluster
05:43 LebedevRI joined #gluster
05:43 psharma joined #gluster
05:45 kumar joined #gluster
05:54 haomaiw__ joined #gluster
06:01 haomaiwa_ joined #gluster
06:02 haomai___ joined #gluster
06:02 joevartuli left #gluster
06:04 elico joined #gluster
06:06 kumar joined #gluster
06:07 Philambdo joined #gluster
06:11 vu joined #gluster
06:14 lalatenduM joined #gluster
06:16 firemanxbr_ joined #gluster
06:18 kumar joined #gluster
06:36 vu joined #gluster
06:39 Peter1 joined #gluster
06:39 Peter1 left #gluster
06:42 prasanth_ joined #gluster
06:42 itisravi_ joined #gluster
06:43 R0ok_ joined #gluster
06:45 ws2k3 Hello, any progress on adding multi-master to glusterfs?
06:45 ricky-ti1 joined #gluster
06:48 ekuric joined #gluster
06:51 shylesh__ joined #gluster
07:02 kanagaraj joined #gluster
07:07 ctria joined #gluster
07:08 ndarshan joined #gluster
07:10 keytab joined #gluster
07:12 shylesh__ joined #gluster
07:17 aravindavk joined #gluster
07:17 andreask joined #gluster
07:22 cultavix joined #gluster
07:31 giannello joined #gluster
07:41 andreask joined #gluster
07:46 cyberbootje joined #gluster
07:47 ccha2 for geo-replication ?
07:49 TvL2386 joined #gluster
07:55 vu joined #gluster
07:55 ndarshan joined #gluster
08:00 aravindavk joined #gluster
08:16 saurabh joined #gluster
08:19 Peter3 joined #gluster
08:19 Peter3 two bricks on one node crashed
08:19 Peter3 error from the brick log:
08:19 Peter3 http://pastie.org/9411618
08:19 glusterbot Title: #9411618 - Pastie (at pastie.org)
08:20 Peter3 any clue why it crashed?
08:38 cyberbootje joined #gluster
08:55 harish__ joined #gluster
09:17 itisravi joined #gluster
09:25 ppai joined #gluster
09:31 ctria joined #gluster
09:34 haomaiwa_ joined #gluster
09:52 nbalachandran joined #gluster
09:54 FooBar joined #gluster
09:55 torbjorn__ joined #gluster
09:55 ccha2 joined #gluster
09:55 JoeJulian joined #gluster
09:55 mdavidson joined #gluster
09:55 klaas joined #gluster
09:55 silky joined #gluster
09:59 Slashman joined #gluster
10:06 jiffin joined #gluster
10:07 bala1 joined #gluster
10:08 kanagaraj joined #gluster
10:14 gehaxelt joined #gluster
10:14 ekuric joined #gluster
10:18 swebb joined #gluster
10:20 haomai___ joined #gluster
10:27 shubhendu_ joined #gluster
10:28 shubhendu_ joined #gluster
10:28 Humble joined #gluster
10:52 anoopcs joined #gluster
10:59 Humble joined #gluster
10:59 diegows joined #gluster
11:05 ricky-ti1 joined #gluster
11:12 davent joined #gluster
11:13 davent OK, this is doing my nut, someone must know the answer to this: How do you set the TCP port for a brick? the damn things appear to pick a port at random, which makes working behind firewalls a nightmare
11:14 edward1 joined #gluster
11:16 ctria joined #gluster
11:30 ctria joined #gluster
11:31 FooBar davent: Ensure that TCP ports 111, 24007,24008, 24009-(24009 + number of bricks across all volumes) are open on all Gluster servers. If you will be using NFS, open additional ports 38465 to 38467.
11:31 FooBar they should enumerate from 24009 onwards
11:34 davent they don't
11:34 davent let me rephrase that
11:34 davent they do...to start with
11:34 glusterbot New news from newglusterbugs: [Bug 1118311] After enabling nfs.mount-udp mounting server:/volume/subdir fails <https://bugzilla.redhat.com/show_bug.cgi?id=1118311>
11:34 davent then if you stop and restart volumes they can sometimes pick a totally different port
11:35 davent one brick went from 49152 to 49153 to 49182
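
A hedged aside on davent's port question: from 3.4 onward brick processes take ports from 49152 upward (the 24009+ range in the older docs applies to 3.3 and earlier), and the port a given brick gets can change when its process is restarted. "volume status" shows the port each brick currently uses, so the practical approach is to open the whole range bricks can land in. The volume name and range below are illustrative.

    gluster volume status myvol      # the Port column shows what each brick listens on right now
    iptables -A INPUT -p tcp -m multiport --dports 24007:24008,49152:49200 -j ACCEPT
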
11:36 ramteid joined #gluster
11:39 deepakcs joined #gluster
11:42 cultavix joined #gluster
11:54 Humble joined #gluster
11:57 Rafi_kc joined #gluster
12:04 glusterbot New news from newglusterbugs: [Bug 1122028] Unlink fails on files having no trusted.pgfid. xattr when linkcount>1 and build-pgfid is turned on. <https://bugzilla.redhat.com/show_bug.cgi?id=1122028>
12:16 ctria joined #gluster
12:18 hchiramm_ joined #gluster
12:20 jdarcy joined #gluster
12:25 kdhananjay joined #gluster
12:29 cultavix joined #gluster
12:31 davent left #gluster
12:32 Humble joined #gluster
12:33 cristov joined #gluster
12:34 glusterbot New news from newglusterbugs: [Bug 1122037] [Dist-geo-rep] : In a cascaded setup, after hardlink sync, slave level 2 volume has sticky bit files found on mount-point. <https://bugzilla.redhat.com/show_bug.cgi?id=1122037> || [Bug 1058300] VMs do not resume after paused state and storage connection to a gluster domain (they will also fail to be manually resumed) <https://bugzilla.redhat.com/show_bug.cgi?id=1058300>
12:35 gmcwhistler joined #gluster
12:41 ekuric joined #gluster
12:45 maptz1 joined #gluster
12:50 nbalachandran joined #gluster
12:51 theron joined #gluster
12:53 theron_ joined #gluster
12:54 nullck joined #gluster
12:54 B21956 joined #gluster
12:56 julim joined #gluster
13:01 hagarth joined #gluster
13:02 Humble joined #gluster
13:03 bennyturns joined #gluster
13:14 tdasilva joined #gluster
13:18 kdhananjay joined #gluster
13:18 rwheeler joined #gluster
13:22 kdhananjay joined #gluster
13:23 sjm joined #gluster
13:33 ekuric1 joined #gluster
13:37 bala joined #gluster
13:39 cultavix joined #gluster
13:48 vshankar joined #gluster
13:49 R0ok_ joined #gluster
13:59 Peter4 joined #gluster
13:59 Peter4 help! My entire gluster just crashed :(
14:01 Peter4 bricks of a volume crashed after i disabled and reenabled quota
14:05 JoeJulian Peter4: brick log
14:05 Peter4 http://pastie.org/9411618
14:05 glusterbot Title: #9411618 - Pastie (at pastie.org)
14:06 anoopcs joined #gluster
14:07 mortuar joined #gluster
14:08 Peter4 how should i bring up the bricks now?
14:08 Peter4 after i disabled and reenable the quota
14:08 JoeJulian sure. gluster volume start $vol force
14:09 JoeJulian that /should/ bring them back up
14:09 Peter4 the volume seems still up
14:09 JoeJulian hence the need for the word force
14:10 Peter4 http://pastie.org/9412255
14:10 glusterbot Title: #9412255 - Pastie (at pastie.org)
14:10 Peter4 just most of the bricks were down
14:10 Peter4 thus lost some files
14:11 Peter4 would the volume start force bring up all the bricks?
14:11 Peter4 or should i bring up the bricks individually?
14:11 JoeJulian It should bring up all the bricks
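
A minimal sketch of the sequence JoeJulian describes, with an illustrative volume name:

    gluster volume status sas02          # the Online column shows which brick processes are down
    gluster volume start sas02 force     # respawns the missing brick processes; bricks already up are untouched
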
14:12 wushudoin joined #gluster
14:12 JoeJulian After you're back up, let's see if it dumped core.
14:13 getup- joined #gluster
14:13 bene2 joined #gluster
14:13 Peter4 where should be the dumped core?
14:13 JoeJulian /
14:13 Peter4 ok
14:13 Peter4 seems like NFS on some nodes are down
14:14 JoeJulian restart glusterd on those
14:14 Peter4 ok
14:16 Peter4 yes large core file!
14:16 Peter4 may I know how do i read it?
14:16 JoeJulian gdb --core=$corefile
14:17 JoeJulian Please file a bug report. Include the crash report you pastied and attach that core file.
14:17 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
14:17 JoeJulian The devs can use that to figure out what went wrong and fix it.
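
A short sketch of inspecting the core, with an illustrative core path; installing the matching glusterfs debuginfo/dbg package first (name depends on the distro) makes the backtrace far more useful.

    gdb /usr/sbin/glusterfsd --core=/core.12345
    # inside gdb:
    #   (gdb) bt                         backtrace of the thread that hit SIGSEGV
    #   (gdb) thread apply all bt full   full backtraces, good to paste into the bug report
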
14:18 Peter4 Core was generated by `/usr/sbin/glusterfsd -s glusterprod001.shopzilla.laxhq --volfile-id sas02.glust'.
14:18 JoeJulian If you're all up and running, I've got a PT appointment to get to.
14:18 Peter4 Program terminated with signal 11, Segmentation fault.
14:18 JoeJulian yep
14:18 Peter4 ok thanks
14:19 Peter4 thanks joejulian!
14:19 JoeJulian Please file that bug.
14:19 Peter4 it's up now
14:19 Peter4 not sure how long
14:19 Peter4 how do i see the entire bug report?
14:19 Peter4 it's just a few lines
14:19 Peter4 sorry
14:21 bennyturns joined #gluster
14:22 JoeJulian You have to file it before there is one. Go to the link given by glusterbot after I say the key phrase "file a bug". Log in. Add the text you pastied with the crash report. Attach the core file (probably should gzip it). Fill in the rest of the fields as you think are correct, and submit it. Let me know the bug ID. I'm interested in following it.
14:22 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
14:25 Peter4 ok thanks
14:25 Peter4 and the quota on some directory just went up and filled up
14:25 Peter4 i will file a bug
14:25 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
14:27 jbrooks joined #gluster
14:29 shubhendu_ joined #gluster
14:30 lmickh joined #gluster
14:32 jobewan joined #gluster
14:38 _dist joined #gluster
14:39 Peter4 Still seeing bricks crashing :(
14:40 Peter4 bricks on the same volume still crashing
14:42 cmtime joined #gluster
14:42 doo joined #gluster
14:49 anoopcs1 joined #gluster
14:51 kanagaraj joined #gluster
14:52 tziOm joined #gluster
14:59 recidive joined #gluster
15:01 Peter4 https://bugzilla.redhat.com/show_bug.cgi?id=1122120
15:01 glusterbot Bug 1122120: urgent, unspecified, ---, gluster-bugs, NEW , Bricks crashing after disable and re-enabled quota on a volume
15:01 kdhananjay joined #gluster
15:04 Peter4 any devl here?
15:04 Peter4 my bricks keep crashing left and right :(
15:05 glusterbot New news from newglusterbugs: [Bug 1122120] Bricks crashing after disable and re-enabled quota on a volume <https://bugzilla.redhat.com/show_bug.cgi?id=1122120>
15:10 sputnik13 joined #gluster
15:11 kkeithley1 joined #gluster
15:14 shubhendu_ joined #gluster
15:15 premera joined #gluster
15:17 sputnik13 joined #gluster
15:18 ndk joined #gluster
15:20 Lee- joined #gluster
15:22 Lee- Hello. I'm running debian 7 with gluster 3.2. I have 2 servers with a replicated volume. I have a single client. As a test, I reboot one of my servers, but the client hangs until the server comes back. I read about  42 second timeout some people were having before the client determined the server is down, but my client appears to hang indefinitely until the server comes back up. It doesn't seem to matter which of the 2 servers I take offline, I get the
15:22 Lee- same results. Is this a bug in 3.2.7 or am i misunderstanding something about replication configuration? Any assistance greatly appreciated. Thanks.
15:22 tziOm joined #gluster
15:27 jiffin joined #gluster
15:32 Lee- Turns out I was wrong. It appears if I let it go that it finally unblocks after somewhere between 10 and 15 mins
15:38 kanagaraj joined #gluster
15:39 jiffin joined #gluster
15:40 plarsen joined #gluster
15:42 ekuric1 left #gluster
15:45 plarsen joined #gluster
15:46 andreask joined #gluster
15:51 nueces joined #gluster
15:52 Peter4 semiosis u here?
15:53 jiffin1 joined #gluster
15:54 Lee- I found a value that may be related to my issue -- network.frame-timeout. What I can't find is a way to check what the current value is. I see there is a "set" command, but no "get". Is there a way to see what these current volume options are?
15:56 Lee- http://gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options -- This page states that my network.frame-timeout should be 30 minutes, but I timed the "ls" command and it hung for 15min38 seconds. I want to get a list of all of the current configuration settings to see if there is something that falls within this time frame that would make sense to be the culprit.
15:56 glusterbot Title: Gluster 3.2: Setting Volume Options - GlusterDocumentation (at gluster.org)
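
A hedged sketch of checking option values (3.4+ syntax; 3.2 may not support "volume set help"): "volume info" only lists options that have been explicitly changed, while the compiled-in defaults come from "volume set help". The 42-second timeout Lee- read about is network.ping-timeout rather than network.frame-timeout.

    gluster volume info myvol                          # "Options Reconfigured" shows non-defaults only
    gluster volume set help | grep -A 2 ping-timeout   # shows the built-in default (42 seconds)
    gluster volume set myvol network.ping-timeout 42   # setting it explicitly records it in volume info
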
15:57 sage joined #gluster
15:57 jiffin joined #gluster
16:06 _dist joined #gluster
16:07 _dist I'm wondering if anyone has seen read(....) = -1 EAGAIN (resource temporarily unavailable) in a qemu strace running off fuse or libgfapi (libgfapi in my case)
16:08 jiffin1 joined #gluster
16:10 anoopcs joined #gluster
16:12 maptz1 left #gluster
16:16 anoopcs1 joined #gluster
16:18 jiffin joined #gluster
16:23 Lee- I can't even find the definition of this value in the source code by grepping it. Only a single match: xlators/mgmt/glusterd/src/glusterd-volgen.c:        {"network.frame-timeout",                "protocol/client",    NULL, NULL, NO_DOC, 0     },
16:23 Lee- Even though the web page says that its default value is hard-coded
16:37 JoeJulian ~pasteinfo | Lee-
16:37 glusterbot Lee-: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
16:37 edwardm61 joined #gluster
16:37 jiffin1 joined #gluster
16:37 anoopcs joined #gluster
16:38 Lee- http://fpaste.org/119869/47106140/
16:38 glusterbot Title: #119869 Fedora Project Pastebin (at fpaste.org)
16:39 JoeJulian Ok, that looks right. How about a client log where you're experiencing the problem. (/var/log/glusterfs/mount-point.log)
16:40 JoeJulian also 3.2 is really old.
16:40 JoeJulian @ppa
16:40 glusterbot JoeJulian: The official glusterfs packages for Ubuntu are available here: 3.4 stable: http://goo.gl/u33hy -- 3.5 stable: http://goo.gl/cVPqEH -- introducing QEMU with GlusterFS 3.4 support: http://goo.gl/7I8WN4
16:42 daMaestro joined #gluster
16:43 theron joined #gluster
16:44 anoopcs joined #gluster
16:51 anoopcs1 joined #gluster
16:52 jiffin joined #gluster
16:52 anoopcs joined #gluster
16:55 Lee- ok earlier it was consistently taking ~15mins. Now it seems to be consistently taking around 50 seconds, so I suspect this is the network.ping-timeout. I've recreated these test VMs so many times now. I don't have logs from when it was taking 15 mins and I can't seem to get it to take more than 1 minute now. Earlier it was as simple as mounting the volume on the client, touch test1, shut down a server, rm test1
16:55 Lee- now that same process only hangs for 50 seconds
16:56 Lee- I gather that 3.2 is old enough that I probably should avoid it. I'll try recreating these VMs with a newer package
16:59 tdasilva joined #gluster
17:04 Philambdo joined #gluster
17:07 Peter4 JoeJulian, the bricks still keep crashing…..any reason why?
17:07 vshankar joined #gluster
17:10 nishanth joined #gluster
17:11 zerick joined #gluster
17:13 Humble joined #gluster
17:14 MacWinner joined #gluster
17:22 rotbeard joined #gluster
17:23 tristanz joined #gluster
17:24 _dist JoeJulian: I almost have my test env finished, then I'll confirm the problem still exists, then upgrade to 3.5.1 and see if goes away
17:27 glusterbot New news from resolvedglusterbugs: [Bug 1121347] NFS: When running rm -rf files / directories are not deleted. <https://bugzilla.redhat.com/show_bug.cgi?id=1121347>
17:28 demonicsage joined #gluster
17:28 demonicsage hi guys
17:28 demonicsage Anyone around? :)
17:28 demonicsage I'm having problem mounting my gluster volumes on a client VPS :(
17:29 demonicsage the volume setup on two other instances went fine. (I'm using this guide from DigitalOcean: https://www.digitalocean.com/community/tutorials/how-to-create-a-redundant-storage-pool-using-glusterfs-on-ubuntu-servers)
17:29 glusterbot Title: How To Create a Redundant Storage Pool Using GlusterFS on Ubuntu Servers | DigitalOcean (at www.digitalocean.com)
17:30 demonicsage But when I mount it on client VPS, it failed. I checked the log files. It's showing [name.c:249:af_inet_client_get_remote_sockaddr] 0-glusterfs: DNS resolution failed on host gluster1.server
17:30 hagarth joined #gluster
17:31 demonicsage oh shit... I forgot to put the hostnames in /etc/hosts on the client VPS
17:32 _dist I did that not too far back; you'll likely need to stop all the server services, I ended up with locks because of it
17:32 demonicsage Damn
17:33 demonicsage yeah, i just realized my stupid mistake
17:33 demonicsage :S embarrassing
17:33 demonicsage i will just put exceptions for the Gluster ports in ufw
17:39 bala joined #gluster
17:43 _dist JoeJulian: can the upgrade process from 3.4.2 to 3.5.1 be done live? I'm going to do it in a test cluster but I'm just curious what to expect
17:46 vpshastry joined #gluster
17:46 chirino joined #gluster
17:58 jiku joined #gluster
17:59 demonicsage eh a question _dist, why upgrade to 3.5.1 though? Isn't the 3.4.* branch the stable version now?
18:03 _dist because I have a healing issue with VM files on 3.4.2 that there was a patch for in 3.5.1, and I want to see if it resolves my issue
18:05 glusterbot New news from newglusterbugs: [Bug 1122186] Compilation fails if configured with --disable-xml-output option <https://bugzilla.redhat.com/show_bug.cgi?id=1122186>
18:05 demonicsage ah
18:08 giannello joined #gluster
18:10 bene2 joined #gluster
18:11 kkeithley_ 3.4.x is legacy stable/production. Will EOL in a few months probably. 3.5.x is, IMO, also stable. If I were setting up a new system I'd go straight to 3.5.1. We continue to support 3.4 with updates (3.4.5 will be released soon) for people running existing systems who don't want to make the leap to 3.5.
18:12 kkeithley_ s/Will EOL/It will EOL/
18:12 glusterbot What kkeithley_ meant to say was: 3.4.x is legacy stable/production. It will EOL in a few months probably. 3.5.x is, IMO, also stable. If I were setting up a new system I'd go straight to 3.5.1. We continue to support 3.4 with updates (3.4.5 will be released soon) for people running existing systems who don't want to make the leap to 3.5.
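
A hedged sketch of the rolling upgrade _dist asks about, one server at a time on a replicated volume; package and service names depend on the distro, and the 3.5 upgrade notes should be checked first (quota-enabled volumes in particular may need extra steps). Note that clients linked against libgfapi, such as qemu, only pick up the new client library once the process is restarted.

    gluster volume heal myvol info     # confirm nothing is pending before starting
    service glusterfs-server stop      # on the first server
    # ...upgrade the glusterfs packages on that server...
    service glusterfs-server start
    gluster volume heal myvol info     # wait for heals to finish, then repeat on the next server
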
18:34 xleo joined #gluster
18:38 nage joined #gluster
18:39 cultavix joined #gluster
18:42 ekuric joined #gluster
18:49 B21956 joined #gluster
18:56 DanishMan joined #gluster
19:22 bala joined #gluster
19:23 _dist I've got my test gluster up and running, and it definitely has the same "healing issue"
19:25 chirino joined #gluster
19:27 _dist JoeJulian: if I'm upgrading gluster to 3.5.1 will I need to recompile qemu? I suspect the fix won't take effect if I'm still using an old .h for the qemu compile
19:31 codex joined #gluster
19:39 JoeJulian I haven't checked the header. Did anything change in it?
19:40 _dist well, I'm not having luck after upgrading the server, libgfapi is complaining about timeouts
19:40 JoeJulian What's the complaint look like?
19:41 _dist I'm going to have to run it more "manually" (doing now) to get the specific error
19:49 dberry joined #gluster
19:51 _Bryan_ joined #gluster
19:51 sputnik13 joined #gluster
19:52 _dist looks like it actually kills the kvm module
19:53 _dist that doesn't make sense though, let me look into it further
19:57 theY4Kman joined #gluster
19:58 theY4Kman When I run `gluster volume quota home list /mypath`, I get output in the format "/mypath        400.000000GB              355.2GB". Is there any way to get those sizes in bytes?
20:00 JoeJulian I doubt it would be different, but try "gluster --xml ..."
20:01 theY4Kman Ooh, good to know that exists, but you're right, same output
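
A hedged workaround sketch for theY4Kman's question, since the CLI output is human-formatted: on the bricks, the configured limit and the accounted usage are stored in extended attributes as 64-bit byte counts (the exact xattr layout is version-dependent, and the brick path below is illustrative).

    getfattr -e hex -n trusted.glusterfs.quota.limit-set /bricks/home/mypath
    getfattr -e hex -n trusted.glusterfs.quota.size      /bricks/home/mypath
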
20:01 theY4Kman I got a bonus coredump, too
20:02 JoeJulian ooh
20:02 JoeJulian What version?
20:03 theY4Kman "glusterfs 3.4.2 built on Jan  3 2014 12:38:06"
20:03 JoeJulian I wonder if that's fixed already.
20:05 JoeJulian If it's related to the fixed string size that used to be used with quotas, it may be fixed in a newer version.
20:05 theY4Kman I'll try updating
20:05 JoeJulian If you can try that and still get a core dump, can you please file a bug report?
20:05 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
20:05 theY4Kman My ops guy is gonna kill me if I fuck it up :P
20:06 Pork__ joined #gluster
20:06 JoeJulian Once it's effed up, it's ops problem. He can't kill you 'till he fixes it. That should give you enough time to hide.
20:08 theY4Kman Hahahaha
20:09 Pork__ Hey again. I have replaced one of my servers, and now one of my two-brick replica setups is missing a brick. I have h1:/b1 and h2:/b1. h2:/b1 no longer exists, and I need to replace it with h2:/b2, then heal the data from h1:/b1. Can someone tell me what to google for?
20:09 Pork__ If I look up anything related to "replace-brick", all I get is the deprecated command
20:11 Pork__ I have probed the new h2 machine from h1, and the brick h2:/b2 is ready
20:11 Pork__ Looks like h1 is still looking for h2:/b1
20:12 JoeJulian If you try to run the deprecated command, what does it say?
20:13 Pork__ JoeJulian: I suppose I could try
20:13 Pork__ JoeJulian: To be honest, I was scared
20:16 _dist JoeJulian: I'm upgrading to qemu 2.x
20:20 Pork__ JoeJulian: Looks like gluster replace-brick h2:/b1 h2:/b2 doesn't recognize the command
20:23 Pork__ These mail sites are incredibly hard to read
20:29 Pork__ JoeJulian: I was doing it wrong
20:30 Pork__ JoeJulian: gluster volume replace-brick VOLNAME h2:/b1 h2:/b2
20:30 Pork__ JoeJulian: gluster volume replace-brick VOLNAME h2:/b1 h2:/b2 start
20:32 Pork__ JoeJulian: error: brick: h1:/b1 does not exist in volume VOLNAME
20:34 Pork__ Is it possible to first remove the no-longer-existent brick, then add one to replace it?
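
A hedged sketch of the usual route on 3.4 when the old brick is gone for good: since it cannot take part in a data migration, commit the replacement directly and let self-heal repopulate the new brick (volume and brick names as in Pork__'s example).

    gluster volume replace-brick VOLNAME h2:/b1 h2:/b2 commit force
    gluster volume heal VOLNAME full     # copy the data back from the surviving replica h1:/b1
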
20:41 andreask joined #gluster
20:46 bala joined #gluster
20:51 theron joined #gluster
21:00 CodeMonke joined #gluster
21:07 Peter4 brick processes keep dying…..is there a way to auto-restart them??
21:07 Peter4 sorry it's been a long day....
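
A stopgap sketch only, not a fix for the underlying crash: "start ... force" respawns any brick process that is down and does nothing when everything is already running, so it can be run periodically while the bug is being chased (volume name illustrative).

    # /etc/cron.d/gluster-respawn
    */5 * * * * root gluster volume start sas02 force >/dev/null 2>&1
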
21:17 Pork__ joined #gluster
21:17 Pork__ Well, that didn't work
21:17 Pork__ Now I can't get my server to stay started
21:23 gehaxelt Hey, short question: I have a replica 2 setup with 2 servers. Can I add another brick (server 3) with "add-brick" ?
21:23 gehaxelt Do I need to watch out for anything?
21:25 Pork__ d
21:25 Pork__ gehaxelt: I just got screwed trying to do that
21:25 Pork__ A few minutes ago
21:25 gehaxelt uh
21:26 gehaxelt shit :d
21:26 gehaxelt Pork__, what happened?
21:28 Pork__ gehaxelt: Both systems locked up (was trying to add third brick)
21:29 Pork__ gehaxelt: I am looking to see if there was any damage done
21:30 Pork__ gehaxelt: Now I am getting this really weird behavior: My glusterfs-server starts, then stops on its own after a few seconds
21:30 giannello joined #gluster
21:30 gehaxelt uh
21:31 gehaxelt hmm, I'm a bit scared now.
21:31 gehaxelt Don't want to f**k up my systems...
21:32 Pork__ gehaxelt: I think that the data is fine
21:32 gehaxelt okay then :)
21:32 Pork__ gehaxelt: I really can't get the glusterfs-server to work
21:32 gehaxelt Any useful stuff in the logfiles?
21:33 kkeithley_ gehaxelt: if you add just one more brick, then you have to go to replica 3.  If you want to stay at replica 2 then you need to add two more bricks.
21:34 JoeJulian @lucky how to expand glusterfs by one server
21:34 glusterbot JoeJulian: http://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
21:34 JoeJulian gehaxelt: ^
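
A minimal sketch of the two options kkeithley_ describes, with illustrative volume and brick names:

    gluster volume add-brick VOLNAME replica 3 h3:/b1    # one new brick: the set becomes a 3-way replica
    gluster volume add-brick VOLNAME h3:/b1 h4:/b1       # two new bricks: stays replica 2, adds a distribute leg
    gluster volume rebalance VOLNAME start               # spread existing data onto the new pair
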
21:35 JoeJulian Pork__: Sorry, I'm on a train and losing connection. In that replace-brick command you just posted, you forgot the volume name
21:35 Pork__ gehaxelt: looking at them now
21:35 gehaxelt JoeJulian, thanks, I'll look at it
21:36 sjm left #gluster
21:36 Pork__ JoeJulian: Thanks, bro
21:47 cultavix joined #gluster
21:49 bennyturns joined #gluster
21:50 cultavix joined #gluster
21:52 cultavix joined #gluster
21:52 bennyturns joined #gluster
22:00 cultavix joined #gluster
22:06 CodeMonke joined #gluster
22:11 harish__ joined #gluster
22:12 Pork__ joined #gluster
22:14 Pork__ gehaxelt: Still here, bro?
22:15 gehaxelt Pork__, yep
22:15 gehaxelt What's up?
22:15 Pork__ gehaxelt: Seems to be working
22:15 gehaxelt Cool :)
22:15 Pork__ gehaxelt: I have no idea why it locked up when I added the brick, but the systems are back, and they're syncing
22:16 gehaxelt great
22:17 Pork__ gehaxelt: still really weird
22:17 Pork__ gehaxelt: did it work for you?
22:17 gehaxelt hmm. Haven't tried it yet.
22:18 gehaxelt I'll do that at the weekend.
22:18 gehaxelt In case something goes wrong, I'll have some more time to fix the stuff.
22:19 gehaxelt But thanks for sharing your experiences :)
22:23 CodeMonke Is there a way to set up my storage bricks on two different networks?  i.e. each has 2 NICs with a private network for storage peering and a separate one for clients.
22:24 CodeMonke I looked through the docs but it wasn't so clear.
22:26 qdk joined #gluster
22:27 kkeithley1 joined #gluster
22:29 gehaxelt CodeMonke, iptables?
22:29 gehaxelt Afaik the 240xx ports are for peering and the 491xx ports are for clients/bricks.
22:30 CodeMonke That would only work with TCP though.  My hope was to go over RDMA.
22:31 gehaxelt hmm
22:33 CodeMonke If I just use hostnames though, will it all "work" if I just have the servers use different IP's with the same hostnames (static in the /etc/hosts file)?
22:34 CodeMonke They still use the IPs via IPoIB for addressing anyway.
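
A hedged sketch of the split-horizon /etc/hosts approach CodeMonke describes (hostnames and addresses are illustrative): the same peer hostnames resolve to storage-network addresses on the servers and to client-facing addresses on the clients, so the bricks peer over one network while clients mount over the other.

    # on each gluster server:
    10.0.0.1     gluster1
    10.0.0.2     gluster2
    # on each client:
    192.168.1.1  gluster1
    192.168.1.2  gluster2
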
22:34 ultrabizweb I don't know much about RDMA, but I have a 2-node replicated setup and the second node is on a KVM VPS in another datacenter
22:39 rotbeard joined #gluster
22:39 CodeMonke I wish there was a virtual IB device for kvm that I could test it with.  That'd be nice.
22:40 CodeMonke There's SR-IOV but that's just tapping into real IB hardware.  It'd be nice to have a completely virtual IB device.
22:43 ultrabizweb InfiniBand?
22:44 ultrabizweb things are moving forward I heard the other day they did 50GB over standard copper
22:44 ultrabizweb telco line
22:44 ultrabizweb for short distance
22:45 ultrabizweb 1GB for about 1 quarter of a mile I could be wrong but if it takes off looks very good.
22:45 CodeMonke InfiniBand has a way better bang for your buck though if your applications can use it.
22:46 CodeMonke Far less protocol stack overhead than TCP or even the underlying ethernet.
22:47 ultrabizweb yea, for the moment no doubt. I heard equipment is getting cheaper; personally I've never worked with it, but not because I don't want to.
22:47 CodeMonke It's just not as general as ethernet.  With such low latency though you can get excellent IOPS that you can't really approach with ethernet.
22:48 ultrabizweb cool sorry it was 10gbs not 50gbs still impressive
22:49 ultrabizweb http://www.alcatel-lucent.com/press/2014/alcatel-lucent-sets-new-world-record-broadband-speed-10-gbps-transmission-data-over-traditional
22:49 glusterbot Title: Alcatel-Lucent sets new world record broadband speed of 10 Gbps for transmission of data over traditional copper telephone lines | Alcatel-Lucent (at www.alcatel-lucent.com)
22:59 Pupeno joined #gluster
23:00 Peter4 is it possible to convert a replica 2 volume to a distribute volume?
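
A hedged, untested sketch for Peter4's question, which gets no answer here: the usual approach is to lower the replica count with remove-brick, dropping one brick from each replica pair (names illustrative).

    gluster volume remove-brick VOLNAME replica 1 h2:/b1 force
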
23:08 tyl0r joined #gluster
23:08 tyl0r left #gluster
23:10 tyl0r joined #gluster
23:11 tristanz joined #gluster
23:12 firemanxbr joined #gluster
23:15 pdrakewe_ joined #gluster
23:18 sauce joined #gluster
23:25 jvandewege joined #gluster
23:35 tyl0r joined #gluster
23:39 firemanxbr joined #gluster
23:42 Pupeno joined #gluster
23:55 tristanz joined #gluster
