IRC log for #gluster, 2014-08-27

All times shown according to UTC.

Time Nick Message
00:09 sputnik13 joined #gluster
00:23 qdk joined #gluster
00:28 RicardoSSP joined #gluster
00:28 RicardoSSP joined #gluster
00:33 recidive joined #gluster
00:34 coredump joined #gluster
00:34 sputnik13 joined #gluster
00:35 xrandr joined #gluster
00:36 xrandr Do 2 gluster hosts have to be on the same network? I have 2 servers from 2 different providers that I want to use gluster with
00:59 sputnik13 joined #gluster
01:20 vimal joined #gluster
01:26 lyang0 joined #gluster
01:28 topshare joined #gluster
01:49 topshare joined #gluster
01:58 bala joined #gluster
02:03 topshare joined #gluster
02:08 kombucha_ joined #gluster
02:11 jiku joined #gluster
02:20 sputnik13 joined #gluster
02:27 haomaiwa_ joined #gluster
02:35 haomaiwa_ joined #gluster
02:43 haomaiw__ joined #gluster
02:49 guntha joined #gluster
02:49 guntha joined #gluster
02:56 sputnik13 joined #gluster
03:26 bharata-rao joined #gluster
03:42 shubhendu joined #gluster
03:50 jezier joined #gluster
03:51 itisravi joined #gluster
03:52 hagarth joined #gluster
03:54 kanagaraj joined #gluster
04:07 ndarshan joined #gluster
04:08 KKA left #gluster
04:10 hchiramm_ joined #gluster
04:13 suliba joined #gluster
04:35 recidive joined #gluster
04:37 gildub joined #gluster
04:39 ramteid joined #gluster
04:40 suliba joined #gluster
04:42 RameshN joined #gluster
04:44 rafi1 joined #gluster
04:44 anoopcs joined #gluster
04:49 meghanam joined #gluster
04:49 dusmant joined #gluster
04:50 nbalachandran joined #gluster
04:51 kombucha_ joined #gluster
04:52 ppai joined #gluster
04:55 suliba joined #gluster
04:57 kdhananjay joined #gluster
04:59 nthomas joined #gluster
05:11 rjoseph joined #gluster
05:13 jiffin joined #gluster
05:15 kshlm joined #gluster
05:16 gildub joined #gluster
05:19 hagarth joined #gluster
05:30 spandit joined #gluster
05:35 glusterbot New news from newglusterbugs: [Bug 1119328] Remove libgfapi python example code from glusterfs source <https://bugzilla.redhat.com/show_bug.cgi?id=1119328>
05:43 nshaikh joined #gluster
05:48 nbalachandran joined #gluster
06:03 karnan joined #gluster
06:04 bala joined #gluster
06:09 R0ok_ joined #gluster
06:10 rjoseph1 joined #gluster
06:15 deepakcs joined #gluster
06:16 Bardack joined #gluster
06:20 haomaiwa_ joined #gluster
06:23 dusmant joined #gluster
06:24 ws2k33 joined #gluster
06:26 nthomas joined #gluster
06:28 kdhananjay joined #gluster
06:28 tomased joined #gluster
06:32 atalur joined #gluster
06:36 RaSTar joined #gluster
06:38 RaSTar joined #gluster
06:42 karnan joined #gluster
06:47 lalatenduM joined #gluster
06:53 RaSTar joined #gluster
06:55 RaSTar joined #gluster
06:56 RaSTar joined #gluster
06:56 saurabh joined #gluster
06:56 dusmant joined #gluster
06:58 rjoseph joined #gluster
07:01 nthomas joined #gluster
07:06 ricky-ticky1 joined #gluster
07:06 kombucha_ joined #gluster
07:15 stickyboy Anyone have experience with Solarflare 10GbE NICs?
07:15 stickyboy Looking to buy some NICs for new servers.  We usually use Intel NICs, but I'm curious about Solarflare.
07:24 dusmant joined #gluster
07:24 rjoseph joined #gluster
07:25 hagarth joined #gluster
07:30 fsimonce joined #gluster
07:56 saurabh joined #gluster
08:00 _Spiculum joined #gluster
08:01 rjoseph joined #gluster
08:01 clutchk1 joined #gluster
08:01 T0aD- joined #gluster
08:01 dusmant joined #gluster
08:02 ninkotech_ joined #gluster
08:02 itisravi_ joined #gluster
08:02 JustinClift joined #gluster
08:03 edong23 joined #gluster
08:03 m0zes_ joined #gluster
08:04 tdasilva_ joined #gluster
08:04 cicero joined #gluster
08:04 Rydekull_ joined #gluster
08:04 vincent_1dk joined #gluster
08:04 anoopcs1 joined #gluster
08:05 nated_ joined #gluster
08:05 nated_ joined #gluster
08:05 glusterbot New news from newglusterbugs: [Bug 1134257] glusterfs-api package should pull glusterfs package as dependency <https://bugzilla.redhat.com/show_bug.cgi?id=1134257>
08:07 chirino_m joined #gluster
08:07 stickyboy_ joined #gluster
08:08 twx_ joined #gluster
08:08 portante_ joined #gluster
08:09 ThatGraemeGuy_ joined #gluster
08:09 DJCl34n joined #gluster
08:10 stickyboy joined #gluster
08:10 DJClean joined #gluster
08:11 Frank77 joined #gluster
08:11 hagarth joined #gluster
08:11 hagarth joined #gluster
08:12 avati joined #gluster
08:13 ramteid joined #gluster
08:15 haomaiwa_ joined #gluster
08:16 qdk joined #gluster
08:16 bjornar joined #gluster
08:24 haomai___ joined #gluster
08:30 bala1 joined #gluster
08:32 bala joined #gluster
08:35 haomaiwa_ joined #gluster
08:44 nthomas joined #gluster
08:45 dusmant joined #gluster
08:46 JustinClift joined #gluster
08:47 Slashman joined #gluster
08:57 ppai joined #gluster
09:03 lmickh joined #gluster
09:08 prasanth_ joined #gluster
09:14 glusterbot New news from resolvedglusterbugs: [Bug 824753] FEAT: Log improvements in clear-locks <https://bugzilla.redhat.com/show_bug.cgi?id=824753>
09:23 kkeithley joined #gluster
09:26 liquidat joined #gluster
09:36 glusterbot New news from newglusterbugs: [Bug 1134311] bug-860663.t testcase hangs the system <https://bugzilla.redhat.com/show_bug.cgi?id=1134311> || [Bug 1134305] rpc actor failed to complete successfully on OSX <https://bugzilla.redhat.com/show_bug.cgi?id=1134305>
09:37 haomaiwa_ joined #gluster
09:38 msvbhat joined #gluster
09:42 RameshN joined #gluster
09:43 portante joined #gluster
09:44 glusterbot New news from resolvedglusterbugs: [Bug 963541] glusterd : 'gluster volume status ' some times do not show active task and sometimes it shows tasks which are not active. <https://bugzilla.redhat.com/show_bug.cgi?id=963541>
09:44 hagarth joined #gluster
09:46 kkeithley joined #gluster
09:48 dblack joined #gluster
09:52 JustinClift joined #gluster
09:54 bala joined #gluster
09:57 dusmant joined #gluster
10:02 Pupeno joined #gluster
10:02 rturk|afk joined #gluster
10:03 bfoster joined #gluster
10:07 spandit joined #gluster
10:08 clyons joined #gluster
10:13 Rydekull joined #gluster
10:14 glusterbot New news from resolvedglusterbugs: [Bug 860663] DHT - Hash layout for Directory is changing when sub-volume is down and user does a rebalance operation, as a result at same directory level duplicate files can be created <https://bugzilla.redhat.com/show_bug.cgi?id=860663>
10:28 rolfb joined #gluster
10:33 kanagaraj joined #gluster
10:38 LebedevRI joined #gluster
10:40 bala joined #gluster
10:44 pdrakeweb joined #gluster
10:45 kkeithley1 joined #gluster
10:46 hagarth joined #gluster
11:01 ira joined #gluster
11:01 haomaiwa_ joined #gluster
11:05 haomaiw__ joined #gluster
11:06 haomai___ joined #gluster
11:11 RaSTar joined #gluster
11:15 ppai joined #gluster
11:21 rjoseph joined #gluster
11:23 chirino joined #gluster
11:28 shylesh__ joined #gluster
11:32 recidive joined #gluster
11:35 bala joined #gluster
11:45 ppai joined #gluster
11:50 chirino_m joined #gluster
11:59 pasqd how to install gluster 3.5.2 with apt-get ?
12:01 atinmu joined #gluster
12:03 pasqd nvm,  add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.5
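[editor's note: the full install sequence pasqd implies above, as a sketch for Ubuntu. Assumes `add-apt-repository` is available (`software-properties-common` package); package names are the standard GlusterFS 3.5 ones.]

```shell
# Add the community PPA carrying GlusterFS 3.5.x
sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.5
sudo apt-get update

# Server on the storage nodes, client on the machines that mount volumes
sudo apt-get install glusterfs-server glusterfs-client
```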
12:04 diegows joined #gluster
12:05 kkeithley_ Gluster Community Meeting in #gluster-meeting right now
12:06 edward1 joined #gluster
12:06 jdarcy joined #gluster
12:11 itisravi joined #gluster
12:13 recidive joined #gluster
12:14 baoboa joined #gluster
12:16 ricky-ti1 joined #gluster
12:18 bene joined #gluster
12:20 bennyturns joined #gluster
12:25 sage joined #gluster
12:35 rjoseph joined #gluster
12:42 B21956 joined #gluster
12:49 emi2 joined #gluster
12:50 shubhendu joined #gluster
12:52 LHinson joined #gluster
12:57 topshare joined #gluster
12:57 julim joined #gluster
13:03 chirino joined #gluster
13:04 asku left #gluster
13:10 ekuric joined #gluster
13:10 theron joined #gluster
13:10 hflai joined #gluster
13:16 LHinson joined #gluster
13:21 theron joined #gluster
13:23 emi2 hello everybody
13:29 todakure :P
13:29 emi2 I need a little help here ... I have a web backend with apache/mod_fcgid/suexec/php all installed locally and document root is a fuse mounted glusterfs volume. Apache can serve static documents, but php complains: "[27-Aug-2014 16:25:02 Europe/Bucharest] PHP Fatal error:  Unknown: Failed opening required '/home/sites/<redacted>/public_html/index.html' (include_path='.:/usr/local/php5318s-cgi/lib/php') in Unknown on line 0". Running php from cli is ok. Running fro
13:30 UnwashedMeme joined #gluster
13:33 richvdh joined #gluster
13:38 kkeithley_ semiosis: FYI, I had to respin 3.5.2 Debian DPKGs because they were missing /usr/sbin/glfsheal in glusterfs-server (or any other subpackage). Perhaps you already fixed this in the Ubuntu DPKGS?  Anyway, just FYI
13:39 richvdh folks, how can I take one brick in a replicated pair out of service?
13:40 richvdh (for a reboot, for instance)
13:40 richvdh (and avoiding ping timeouts and I/O errors)
13:40 mojibake joined #gluster
13:40 kkeithley_ @later tell semiosis FYI I respun the 3.5.2 Debian DPKGs because they were missing /usr/sbin/glfsheal in glusterfs-server (or any other subpackage). Perhaps you already fixed this in the Ubuntu DPKGs? Anyway, just FYI
13:40 glusterbot kkeithley_: The operation succeeded.
13:48 chirino joined #gluster
13:49 msmith_ joined #gluster
13:54 recidive joined #gluster
13:55 todakure emi2: does the file exist? are the access permissions OK, for apache, or php?
14:00 sputnik13 joined #gluster
14:02 sputnik13 joined #gluster
14:04 chirino joined #gluster
14:07 cristov joined #gluster
14:07 topshare joined #gluster
14:08 emi2 todakure: yes, the file exists on the volume and has the right permissions. For simplicity I've created an empty php file with the right user/group and 777 permissions. I still get the same error: http://ur1.ca/i2e46
14:08 glusterbot Title: #128959 Fedora Project Pastebin (at ur1.ca)
14:11 chirino joined #gluster
14:11 emi2 open, fstat64, read, seek system calls are ok; ioctl fails
14:11 wushudoin joined #gluster
14:12 gmcwhistler joined #gluster
14:12 recidive joined #gluster
14:13 B21956 joined #gluster
14:14 chirino_m joined #gluster
14:29 sputnik13 joined #gluster
14:30 todakure O.o
14:32 todakure strange, I still haven't found a similar error.
14:34 todakure that log is the syslog ?
14:36 xrandr emi2, have you set the storage owner uid and storage owner gid on the gluster volume?
14:36 xrandr you should set it to apache if apache is the only user that is going to read/write to the gluster volume
14:38 xrandr emi2,  gluster volume set VOL-NAME storage.owner-gid 48
14:38 xrandr emi2,  gluster volume set VOL-NAME storage.owner-uid 48
14:38 xrandr then remount the gluster volume
14:39 emi2 xrandr, I've mounted the volume as "glusterfs#s4:/web /home/sites/<redacted>/public_html fuse rw,nosuid,nodev,noatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0" ... should I change it also here ?
14:41 xrandr yes
14:41 xrandr because you have it set to user_id=0 (root) and group_id=0 (root)
14:41 xrandr change the 0 to 48
14:41 xrandr and remount
14:42 xrandr your other option is to create a new group, add apache to that group, and use that group's gid in group_id to allow apache to read/write from the gluster volume
14:45 xrandr emi2, I had a similar issue, and that fixed it
14:46 cmtime joined #gluster
14:46 AdrianH joined #gluster
14:47 todakure node-02:/files                   /imagens       glusterfs defaults,_netdev 0 0
14:47 todakure on fstab
14:50 AdrianH Hello
14:50 glusterbot AdrianH: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
14:51 AdrianH I am having some trouble mounting Gluster on Ubuntu. I've got it to mount on Amazon Linux but nothing happens on Ubuntu. Am I missing something? I am doing the same thing on both servers...
14:51 coredump joined #gluster
14:53 xrandr AdrianH, what steps have you taken?
14:53 xrandr what does gluster peer status tell you?
14:54 emi2 xrandr, thanks! looks like volume set tricked worked ...
14:54 xrandr emi2, you're welcome! :)
14:54 todakure \0/
14:54 xrandr emi2, keep in mind you may have to do this every time you mount the gluster volume unless you change the settings in fstab as well
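[editor's note: the fix xrandr describes above, end to end, looks roughly like this. The volume name `web` and server `s4` come from emi2's mount line; the mount path is a placeholder because the real one was redacted, and uid/gid 48 is the stock apache account (on emi2's custom build it was 999).]

```shell
# Make the bricks owned by the web server user so a non-root
# process can write through the FUSE mount (48 = apache on RHEL-family)
gluster volume set web storage.owner-uid 48
gluster volume set web storage.owner-gid 48

# Remount on the client so the change takes effect
umount /home/sites/SITE/public_html
mount -t glusterfs s4:/web /home/sites/SITE/public_html
```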
14:55 UnwashedMeme left #gluster
14:55 AdrianH xrandr: it says i've got 3 peers
14:56 AdrianH the setup is working i've got it mounted on 2 Amazon Linux
14:56 AdrianH just Ubuntu refuses to mount gluster, I am using the same commands
14:58 todakure mount -t glusterfs server1:/test-volume /mnt/glusterfs
14:58 AdrianH yep
14:59 AdrianH mount -t glusterfs gluster1:/gluster-volume /home/ubuntu/gluster
15:00 AdrianH the command doesn't return anything and when I do df -h no gluster
15:00 emi2 xrandr, i provided the mount option "user_id=999,group_id=999" but in /proc/mounts still shows "user_id=0,group_id=0" ... so I think only the 'volume set' worked and I don't have to set it for every mount ... I'm still testing around, but it's a big step forward
15:00 emi2 !m xrandr
15:00 [o__o] You're doing good work, xrandr!
15:00 xrandr [o__o], thanks!
15:01 xrandr emi2, 999 is NOT apache
15:03 emi2 xrandr, yes it is in my case
15:03 xrandr it is?
15:03 xrandr apache is usually 48
15:04 xrandr AdrianH, can you message me the output of gluster peer status? Can you probe the client from the server and vice versa?
15:05 emi2 true, but here is customised ... installed from sources, users in ldap, etc
15:05 xrandr emi2, ah ok
15:07 xrandr AdrianH, does the ubuntu box have a firewall? If so, shut it down and try again?
15:07 topshare joined #gluster
15:08 xrandr emi2, LDAP makes my insides all warm and fuzzy
15:20 ilbot3 joined #gluster
15:20 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
15:20 jiffe98 joined #gluster
15:20 bene2 joined #gluster
15:21 semiosis kkeithley_: did you see i added glfs.h?  https://github.com/semiosis/glusterfs-debian/blob/wheezy-glusterfs-3.5/debian/changelog#L4
15:21 semiosis could you send me a PR for your glfsheal changes?
15:22 nbalachandran joined #gluster
15:22 theron joined #gluster
15:22 DJClean joined #gluster
15:22 nshaikh joined #gluster
15:22 Zordrak joined #gluster
15:22 msciciel1 joined #gluster
15:22 sspinner_ joined #gluster
15:22 siXy joined #gluster
15:22 Corey joined #gluster
15:22 tg2 joined #gluster
15:22 edwardm61 joined #gluster
15:22 torbjorn__ joined #gluster
15:22 tty00 joined #gluster
15:22 georgeh|workstat joined #gluster
15:22 gts joined #gluster
15:22 mikedep333 joined #gluster
15:22 tobias-_ joined #gluster
15:22 eightyeight joined #gluster
15:22 mjrosenb joined #gluster
15:22 swebb joined #gluster
15:22 lava_ joined #gluster
15:22 ccha2 joined #gluster
15:22 mibby joined #gluster
15:22 Chr1s1an joined #gluster
15:22 xavih joined #gluster
15:22 mrEriksson joined #gluster
15:23 kkeithley_ semiosis: no, I didn't see the added glfs.h. Where do you want me to file the PR?
15:24 cmtime kmai007, how much space is each brick?
15:24 semiosis kkeithley_: you overwrote my -2 with your -2 :/
15:24 kkeithley_ semiosis: ?
15:24 semiosis kkeithley_: i figured we could use my glusterfs-debian github repo to collab on the packaging.  guess i should've checked in with you about that instead of just assuming
15:25 kkeithley_ oh, that -2
15:25 kkeithley_ 3.5.2-2
15:25 semiosis yep
15:25 kkeithley_ Are you doing debian dpkgs somewhere?
15:25 kkeithley_ I thought you were just Ubuntu PPA
15:26 semiosis i've been doing the debs on download.gluster.org for a while now, i'm just behind schedule on the last release
15:26 recidive joined #gluster
15:26 kkeithley_ oh, oops
15:26 semiosis :)
15:27 kkeithley_ your 3.5.2-2 are still there in .../apt/old
15:27 semiosis i'll do a -3 package tonight with glfsheal & glfs.h and update the github repo
15:28 kkeithley_ okay
15:44 saltsa_ joined #gluster
15:45 semiosis never mind the PR, i'll merge your changes by hand
15:45 emi2 left #gluster
15:46 UnwashedMeme joined #gluster
15:56 sputnik13 joined #gluster
16:14 recidive joined #gluster
16:16 pasqd joined #gluster
16:18 coredump joined #gluster
16:24 RameshN joined #gluster
16:25 tty00 joined #gluster
16:26 bala joined #gluster
16:31 sputnik13 does anyone use NFS as a primary client side interface to gluster?
16:32 sputnik13 just wondering how much load the NFS interface can handle
16:44 PeterA joined #gluster
16:44 qdk joined #gluster
16:45 glusterbot New news from resolvedglusterbugs: [Bug 1006367] dd on fuse mount hangs <https://bugzilla.redhat.com/show_bug.cgi?id=1006367>
16:46 hagarth joined #gluster
16:47 bala joined #gluster
16:52 lmickh joined #gluster
16:58 bene2 sputnik13, we have performance data for NFS on Gluster, want it?
16:59 sputnik13 uhhh, sure
16:59 sputnik13 that would be great :-)
16:59 jobewan joined #gluster
16:59 kmai007 from what i've experienced with gNFS, not much
16:59 fignews bene2, I would be interested also :D
16:59 kmai007 well i take that back
17:00 PeterA i have been having issue with gNFS
17:00 bene2 what version of gluster?
17:00 kmai007 depending on what version, any configuration change will break down the gNFS graph and cause headaches
17:00 kmai007 3.4.2-1
17:00 PeterA when file got renamed or gziped or moved….it complaint setattr issue
17:00 kmai007 PeterA: that find. didn't help you?
17:01 PeterA not much
17:01 kmai007 darn, not sure then,
17:01 PeterA i did the find from gluster client and nfs client
17:01 PeterA neither help much….
17:01 bene2 we are starting to regression test RHS releases (Gluster) on NFS, but we have run tests like this in the past, this was presented at red hat summit.
17:01 PeterA and i noticed more directories complaining with files got moved or renamed or gziped
17:01 chirino joined #gluster
17:02 kmai007 gluster logs are very noisy
17:02 bene2 correction: we are starting to performance-regression test in an automated way, but we have been doing it in a less automated way all along
17:02 kmai007 or INFO
17:02 kmai007 maybe change your logging levels in gluster set
17:02 PeterA they appeared as Error \sE\s
17:03 PeterA so it seems an error to gluster node and looks like the error happen before the healing happen on the node
17:03 PeterA looks like it, though the renamed file is still there and it sent an error
17:03 longshot902 joined #gluster
17:03 PeterA application did not see issue but gluster was expected to setattr or getattr on the renamed files
17:04 kmai007 makes sense
17:04 PeterA bug?
17:04 kmai007 if it was trying to heal it
17:04 PeterA heal info is all clear
17:04 kmai007 i dunno if its a 'bug'
17:04 PeterA and no heal-failed nor split-brain
17:05 kmai007 but to be sure you can file a bugzilla, and somebody will answer it
17:05 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
17:06 stickyboy Anyone use Solarflare 10GbE NICs?
17:06 nthomas joined #gluster
17:11 LHinson joined #gluster
17:12 PeterA i just commented on this bug
17:12 PeterA https://bugzilla.redhat.com/show_bug.cgi?id=1037511
17:12 glusterbot Bug 1037511: high, unspecified, ---, vbellur, NEW , Operation not permitted occurred during setattr of <nul>
17:14 bene2 sputnik13, for older NFS/RHS perf. results see http://rhsummit.files.wordpress.com/2013/07/england_th_0450_rhs_perf_practices-4_neependra.pdf slide 27, 28
17:15 balacafalata joined #gluster
17:17 zerick joined #gluster
17:18 sputnik13 bene2: thank you for the link, I'll take a look through it  :-)
17:19 dtrainor joined #gluster
17:20 dtrainor joined #gluster
17:26 Gue______ joined #gluster
17:28 chirino_m joined #gluster
17:29 PeterA i got this error 2014-08-27 04:00:02.710194] E [posix.c:4294:_posix_handle_xattr_keyvalue_pair] 0-sas03-posix: setxattr failed on /brick03/gfs/.glusterfs/cd/0d/cd0dd36c-c471-4777-8bd8-6415466434e3 while doing xattrop: key=trusted.glusterfs.quota.04bfaa62-f38c-4e4f-a962-42e7a3012b26.contri (No such file or directory)
17:29 PeterA how can the .glusterfs complain?
17:29 PeterA something corrupted??
17:58 LHinson1 joined #gluster
17:59 xleo joined #gluster
18:00 theron joined #gluster
18:01 lalatenduM joined #gluster
18:03 nishanth joined #gluster
18:11 recidive joined #gluster
18:15 LHinson joined #gluster
18:21 VerboEse joined #gluster
18:21 srjhu joined #gluster
18:27 daxatlas_ joined #gluster
18:34 [o__o] joined #gluster
18:37 _dist joined #gluster
18:41 theron joined #gluster
18:47 bene2 joined #gluster
19:02 plarsen joined #gluster
19:06 sputnik13 joined #gluster
19:07 anoopcs joined #gluster
19:17 PeterA1 joined #gluster
19:20 codex joined #gluster
19:21 recidive joined #gluster
19:36 cristov hi~ #gluster!  i have two peers, working with 1 brick each. the volume type is set to replica 2. i want to change the volume type from replica to distributed-replica. what should i do?
19:38 rotbeard joined #gluster
19:40 cristov if i just add more bricks, will it automatically become a distributed replicate volume?
19:42 semiosis yes
19:42 _dist cristov: I think so, but not the way you want maybe
19:43 semiosis you'll have to add bricks in replicated pairs
19:43 semiosis cristov: you'll also have to redistribute
19:52 sputnik13 joined #gluster
19:55 cristov semiosis: what's the right way to understand redistribute? do i have to delete the volume and remake it, or rebalance after adding more bricks to the volume?
19:56 semiosis yeah rebalance, thats what i mean
19:56 semiosis i knew redistribute wasnt right but couldnt remember the right word
19:57 cristov semiosis: thankyou :)
19:57 semiosis yw
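[editor's note: the steps semiosis outlines, adding bricks in replicated pairs and then rebalancing, can be sketched as follows; the volume, server, and brick names are hypothetical.]

```shell
# Adding one more replicated pair turns a 1x2 replica volume
# into a 2x2 distributed-replicate volume
gluster volume add-brick myvol server3:/export/brick1 server4:/export/brick1

# Spread the existing files across the new bricks
gluster volume rebalance myvol start
gluster volume rebalance myvol status
```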
19:59 anoopcs joined #gluster
20:06 B21956 joined #gluster
20:07 sputnik13 joined #gluster
20:21 zerick joined #gluster
20:25 systemonkey joined #gluster
20:57 todakure left #gluster
21:18 PeterA joined #gluster
21:20 Pupeno joined #gluster
21:22 ThatGraemeGuy joined #gluster
21:39 coredump joined #gluster
21:42 evanjfraser hi guys, I'm trying to setup my first test gluster cluster, but the documentation seems broken.  None of the links at: http://www.gluster.org/documentation/Getting_started_overview/ seem to work
21:42 glusterbot Title: Gluster (at www.gluster.org)
21:43 evanjfraser oh
21:43 evanjfraser the /comunity/ needs to be removed from the URL
21:54 bmikhael joined #gluster
21:55 semiosis evanjfraser: thx for the feedback
21:55 semiosis let us know if you run into any problems with your test setup
22:06 evanjfraser thanks semiosis, thats incredibly proactive and I appreciate it :)
22:07 evanjfraser I am co-incidentally having a problem
22:07 evanjfraser on Fedora 20, with VM's,
22:07 evanjfraser I'm unable to peer probe from either VM to the other
22:08 evanjfraser "Connection failed. Please check if gluster daemon is operational."
22:08 evanjfraser I have checked iptables, it's showing allow all
22:08 evanjfraser the vms can ping each other and can contact each others SSH ports
22:09 evanjfraser I'm not sure what port the gluster peer probe command should be trying to connect on
22:09 evanjfraser or I would nc it.
22:10 Pupeno_ joined #gluster
22:10 PeterA what does these error means?
22:10 PeterA .2014-08-27 21:56:33.580521] E [posix-helpers.c:893:posix_handle_pair] 0-sas03-posix: /brick03/gfs/DevMordorHomeSata03//hcamara//custom_interim_reports//ospgroup/womanwithin/weekly_report/mtd/survey_questionstest.txt: key:trusted.glusterfs.dht.linkto error:File exists
22:10 PeterA 2014-08-27 21:56:33.580538] E [posix.c:1177:posix_mknod] 0-sas03-posix: setting xattrs on /brick03/gfs/DevMordorHomeSata03//hcamara//custom_interim_reports//ospgroup/womanwithin/weekly_report/mtd/survey_questionstest.txt failed (File exists)
22:11 PeterA seems like this bug?
22:11 PeterA https://bugzilla.redhat.com/show_bug.cgi?id=1030200
22:11 glusterbot Bug 1030200: medium, unspecified, ---, rhs-bugs, NEW , DHT : file rename operation is successful but log has error 'key:trusted.glusterfs.dht.linkto error:File exists' , 'setting xattrs on <old_filename> failed (File exists)'
22:29 nbvfuel joined #gluster
22:34 nbvfuel Hi-- we're in the process of moving a single webserver (shared everything app) to a loadbalanced multi-webserver setup.
22:34 glusterbot nbvfuel: Hi's karma is now -1
22:35 nbvfuel We've tested lsyncd and csync2 for replicating files back and forth between 2 webservers and it seems to work well enough.  The issue is when we need to add a third web server.
22:37 nbvfuel Gluster seems to be like a more sensible (and less hacky) route, but I was hoping to get someone more knowledgable to beat up our deployment strategy.  That is (at least from the start) two webservers with nginx, will also be gluster nodes.  If we add a third web server, it will strictly be a client.
22:37 bmikhael joined #gluster
22:37 Pupeno joined #gluster
22:40 coredump joined #gluster
22:45 drajen_ joined #gluster
22:46 evanjfraser hmm, my problem might be an selinux thing
22:53 LandenAuer joined #gluster
22:55 semiosis evanjfraser: see ,,(ports)
22:55 glusterbot evanjfraser: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
22:55 semiosis it's usually iptables or selinux
22:55 semiosis or sometimes an ip conflict
22:55 semiosis (assuming you're probing the right address)
22:56 evanjfraser thanks semiosis
22:56 semiosis yw
22:56 gildub joined #gluster
22:56 evanjfraser yup, its the right address :), I'll check port availability in a min, but disabling selinux now
22:56 recidive joined #gluster
23:13 chirino joined #gluster
23:17 bmikhael joined #gluster
23:23 bennyturns joined #gluster
23:24 Pupeno joined #gluster
23:24 evanjfraser semiosis: just feedback, disabling selinux did the trick
23:24 evanjfraser thanks
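[editor's note: a quick checklist for the peer-probe failure debugged above, based on the ports glusterbot listed. Hostnames are placeholders, and the brick-port upper bound is an assumption (3.4+ allocates one port per brick from 49152 up); disabling SELinux entirely, as done here, is a diagnostic step rather than a fix.]

```shell
# Is glusterd reachable on its management port?
nc -zv gluster2 24007

# Is SELinux the culprit? (permissive mode logs denials without enforcing)
getenforce
sudo setenforce 0   # temporary; prefer a proper policy for production

# Open the management and brick ports if iptables is blocking them
sudo iptables -I INPUT -p tcp --dport 24007:24008 -j ACCEPT
sudo iptables -I INPUT -p tcp --dport 49152:49200 -j ACCEPT
```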
23:28 evanjfraser ooh, i like the interactive gluster shell
23:28 bennyturns joined #gluster
23:32 Pupeno joined #gluster
23:32 systemonkey joined #gluster
23:34 evanjfraser hi there,
23:34 evanjfraser regarding mounting the glusterfs
23:34 evanjfraser I'm a bit confused, the mount command specifies to use the first server as the mount source
23:35 evanjfraser does this mean that any client traffic will always go via that server? or does gluster figure it out and balance things once its mounted?
23:35 evanjfraser easy for me to test I guess
23:41 evanjfraser ah yes, failover works great
23:41 evanjfraser nice!
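[editor's note: as evanjfraser observed, the server named in the mount command is only used to fetch the volume description at mount time; the FUSE client then talks to all bricks directly. To avoid depending on that one server being up when mounting, a fallback can be named; server names are hypothetical and the option spelling (`backupvolfile-server`) is the 3.4/3.5-era one.]

```shell
# Mount with a fallback server for the initial volfile fetch
mount -t glusterfs -o backupvolfile-server=gluster2 gluster1:/gluster-volume /mnt/gluster

# Equivalent /etc/fstab entry:
# gluster1:/gluster-volume  /mnt/gluster  glusterfs  defaults,_netdev,backupvolfile-server=gluster2  0 0
```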
23:42 sputnik13 joined #gluster
23:50 nbvfuel evanjfraser: Failover works, in that the client knows to send the request to a different peer?
23:50 evanjfraser yeah
23:51 evanjfraser If understand it, the IP/hostname  in the mount command is only used at mount time.
23:51 evanjfraser the fuse module takes over from there
23:51 evanjfraser similar to GPFS I guess
23:51 nbvfuel Very cool-- thanks for confirming
23:51 glusterbot nbvfuel: cool's karma is now -1
23:51 evanjfraser lol bot
23:52 evanjfraser nbvfuel++
23:52 glusterbot evanjfraser: nbvfuel's karma is now 1
23:52 nbvfuel Need to get out of the habit of double dashes :/
23:52 evanjfraser its a bit stupid that it lets you up/down vote yourself haha