
IRC log for #gluster, 2014-09-03


All times shown according to UTC.

Time Nick Message
00:00 xoritor ok... so maybe a distrepl setup would work best
00:00 JoeJulian That's usually the case.
00:00 xoritor but damn i feel left out in the lurch with a replica of 3
00:01 recidive joined #gluster
00:01 xoritor it may not be the case... but i sure do feel that way
00:01 JoeJulian You can do 4... 99.99999999(ish) uptime.
00:02 JoeJulian That's a few milliseconds a year.
00:02 xoritor technically i only need (as per my sla) about 80% uptime
00:02 xoritor but i can not deliver that low
00:02 xoritor not in my genetic makeup
00:03 JoeJulian replica 3 is like 5 nines.
00:03 xoritor yea
00:03 JoeJulian and you take a performance hit on lookup() the more replicas you have.
00:03 xoritor true
00:04 xoritor i can probably do a replica 2
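A rough model for the uptime figures being thrown around above, assuming each server fails independently and the volume stays available while at least one replica is up (the 99% per-server availability used below is purely illustrative):

    unavailability ~ (1 - A)^N      # A = per-server availability, N = replica count
    A = 0.99, N = 2  ->  1e-4       # about four nines
    A = 0.99, N = 3  ->  1e-6       # about six nines
    A = 0.99, N = 4  ->  1e-8       # about eight nines

The numbers quoted in the chat are looser rules of thumb; real availability also depends on quorum settings and correlated failures.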
00:04 fubada JoeJulian: I just decided to mv /appdata/bricks/splunk_shp/* to another location, and started glusterd again
00:04 fubada is that allowed?
00:04 fubada Brick gls002:/appdata/bricks/splunk_shp  N/A  N  N/A
00:04 fubada path didnt change
00:04 Pupeno joined #gluster
00:04 fubada just medium
00:05 JoeJulian sure
00:05 fubada do the attributes survive mv?
00:05 JoeJulian yes
00:06 fubada http://fpaste.org/130556/70282214/
00:07 glusterbot Title: #130556 Fedora Project Pastebin (at fpaste.org)
00:08 JoeJulian What was your process?
00:10 fubada JoeJulian:  http://fpaste.org/130557/97030041/
00:10 glusterbot Title: #130557 Fedora Project Pastebin (at fpaste.org)
00:10 fubada basically moved the files to another device
00:10 fubada and renamed the mount point
00:10 fubada and remounted
00:11 JoeJulian Ah, missed the directory attributes that way. Not hard to fix.
00:11 fubada volume start: splunk_shp: failed: Failed to get extended attribute trusted.glusterfs.volume-id for brick dir /appdata/bricks/splunk_shp. Reason : No data available
00:12 JoeJulian setfattr -n trusted.glusterfs.volume-id -v 0xa049f0c1c14e4bfd8025696a3393b0ae /appdata/bricks/splunk_shp
00:12 JoeJulian setfattr -n trusted.glusterfs.dht -v 0x000000010000000000000000ffffffff /appdata/bricks/splunk_shp
00:13 JoeJulian and finally
00:13 fubada volume start: splunk_shp: failed: Volume id mismatch for brick gls002:/appdata/bricks/splunk_shp. Expected volume id 1c8ed545-0900-49e4-8e5d-ca39f990e484, volume id a049f0c1-c14e-4bfd-8025-696a3393b0ae found
00:13 fubada oops
00:13 JoeJulian Oh, it's a different volume. I thought we were still looking at splunk_mbdl
00:14 JoeJulian "getfattr -m . -d -e hex /appdata/bricks/splunk_shp" on the other server and set this one the same (except for trusted.afr.* with will be all zeros)
00:14 Pupeno joined #gluster
00:14 fubada volume start: splunk_shp: failed: Failed to get extended attribute trusted.glusterfs.volume-id for brick dir /appdata/bricks/splunk_shp. Reason : Numerical result out of range
00:15 JoeJulian did you miss the "-e hex" part?
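A minimal sketch of the xattr repair being walked through above, assuming the other replica of splunk_shp (on a host other than gls002) still has a healthy brick; the hex values must be copied from that healthy brick's output, not invented:

    # on the server with the healthy brick: dump its extended attributes in hex
    getfattr -m . -d -e hex /appdata/bricks/splunk_shp

    # on the repaired server: set trusted.glusterfs.volume-id and trusted.glusterfs.dht
    # to exactly the values shown on the healthy brick (trusted.afr.* should be all zeros)
    setfattr -n trusted.glusterfs.volume-id -v 0x<value-from-healthy-brick> /appdata/bricks/splunk_shp
    setfattr -n trusted.glusterfs.dht -v 0x<value-from-healthy-brick> /appdata/bricks/splunk_shp

    # then try starting the volume again
    gluster volume start splunk_shp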
00:20 msbrogli joined #gluster
00:33 msbrogli joined #gluster
00:34 JoeJulian JustinClift: Do you know where upstream the patch for bug 1051226 was applied? 'cause I can't find it on the forge.
00:34 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1051226 high, high, RHS 2.1.2, ira, CLOSED ERRATA, Wrong atime updates on a file during write() while using vfs glusterfs CIFS interface
00:37 fubada JoeJulian: thank you for all of your help today
00:37 JoeJulian You're welcome. :D
00:40 JoeJulian ndevos: same question regarding bug 1051226. Where's the upstream patch?
00:40 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1051226 high, high, RHS 2.1.2, ira, CLOSED ERRATA, Wrong atime updates on a file during write() while using vfs glusterfs CIFS interface
00:46 bala joined #gluster
00:51 msbrogli joined #gluster
00:55 msbrogli joined #gluster
00:58 msbrogli joined #gluster
01:08 jmarley joined #gluster
01:18 bala joined #gluster
01:29 gildub joined #gluster
01:34 vimal joined #gluster
01:43 recidive joined #gluster
02:03 7JTAAQ7XY joined #gluster
02:04 sputnik13 joined #gluster
02:10 sputnik13 joined #gluster
02:17 harish_ joined #gluster
02:19 haomaiwang joined #gluster
02:28 recidive joined #gluster
02:37 haomaiwa_ joined #gluster
02:56 an joined #gluster
03:02 Pupeno joined #gluster
03:07 aravindavk joined #gluster
03:29 glusterbot New news from newglusterbugs: [Bug 1136221] The memories are exhausted quickly when handle the message which has multi fragments in a single record <https://bugzilla.redhat.com/show_bug.cgi?id=1136221>
03:53 itisravi joined #gluster
03:54 ndarshan joined #gluster
03:59 shubhendu joined #gluster
04:00 nbalachandran joined #gluster
04:02 kshlm joined #gluster
04:09 harold2 joined #gluster
04:18 harold2 left #gluster
04:28 Pupeno joined #gluster
04:32 dusmant joined #gluster
04:33 Rafi_kc joined #gluster
04:33 rafi1 joined #gluster
04:34 anoopcs joined #gluster
04:34 bala joined #gluster
04:45 RameshN joined #gluster
04:52 jiffin joined #gluster
04:53 atinmu joined #gluster
04:55 kanagaraj joined #gluster
04:59 ppai joined #gluster
05:04 necrogami joined #gluster
05:09 hagarth joined #gluster
05:13 saurabh joined #gluster
05:14 ninkotech_ joined #gluster
05:14 ninkotech joined #gluster
05:14 ninkotech__ joined #gluster
05:23 kdhananjay joined #gluster
05:30 atalur joined #gluster
05:32 prasanth_ joined #gluster
05:35 deepakcs joined #gluster
05:36 meghanam joined #gluster
05:37 meghanam_ joined #gluster
05:37 dmyers does anyone know the right volume setup on aws instances? im confused on root volume with ebs and instance store volumes
05:45 raghu joined #gluster
05:47 rjoseph joined #gluster
05:52 JustinClift JoeJulian: Good question about BZ 1051226.  /me hopes it wasn't accidentally missed from upstream
05:54 JustinClift JoeJulian: Keep asking ndevos I guess, or maybe Harsha as well.  If no luck, as on the -devel mailing list. :)
05:55 JustinClift s/as on/ask on/
05:55 glusterbot JustinClift: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
06:00 Philambdo joined #gluster
06:00 lalatenduM joined #gluster
06:04 an joined #gluster
06:05 an_ joined #gluster
06:05 nshaikh joined #gluster
06:06 atalur joined #gluster
06:10 ghenry_ joined #gluster
06:10 kdhananjay joined #gluster
06:12 meghanam_ joined #gluster
06:13 meghanam joined #gluster
06:22 dusmant joined #gluster
06:25 an joined #gluster
06:28 haomaiw__ joined #gluster
06:31 glusterbot New news from newglusterbugs: [Bug 1136702] Add a warning message to check the removed-bricks for any files left post "remove-brick commit" <https://bugzilla.redhat.com/show_bug.cgi?id=1136702>
06:42 rgustafs joined #gluster
06:44 elico joined #gluster
06:45 meghanam_ joined #gluster
06:45 rgustafs joined #gluster
06:45 meghanam joined #gluster
06:53 jvandewege joined #gluster
06:56 jiku joined #gluster
06:57 mick27 joined #gluster
07:00 jtux joined #gluster
07:00 mick271 joined #gluster
07:01 ndevos JoeJulian, JustinClift: that went upstream as http://git.samba.org/?p=samba.git;a=commitdiff;h=276c161 , I dont know how syncing with the forge is done, if at all?
07:01 glusterbot Title: git.samba.org - samba.git/commitdiff (at git.samba.org)
07:03 deepakcs joined #gluster
07:10 anands joined #gluster
07:17 getup- joined #gluster
07:18 nbalachandran joined #gluster
07:18 rjoseph joined #gluster
07:19 atalur joined #gluster
07:20 calum_ joined #gluster
07:21 fsimonce joined #gluster
07:22 kdhananjay joined #gluster
07:25 dusmant joined #gluster
07:26 kumar joined #gluster
07:29 an_ joined #gluster
07:51 Thilam hi
07:51 glusterbot Thilam: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
07:51 Thilam As some of you know, I've been running glusterfs in a prod env for 2-3 months
07:51 Thilam I encounter problems regarding small files management
07:51 Thilam the system is very slow
07:52 Thilam for example my backups took 3 times as long as they usually do
07:52 vimal joined #gluster
07:53 Thilam is there some specific tuning to do ?
07:58 andreask joined #gluster
07:59 nbalachandran joined #gluster
08:04 ricky-ti1 joined #gluster
08:18 liquidat joined #gluster
08:19 dtrainor_ joined #gluster
08:34 dusmant joined #gluster
08:40 atalur joined #gluster
08:42 kdhananjay joined #gluster
08:55 R0ok_ joined #gluster
08:57 richvdh joined #gluster
09:00 osiekhan joined #gluster
09:06 bala joined #gluster
09:10 dmyers Thilam: im looking at tuning small files too. ive been told it doesnt work well though
09:10 dmyers have you increased the performance.cache_size?
09:15 nbalachandran joined #gluster
09:19 jmarley joined #gluster
09:21 kdhananjay joined #gluster
09:26 meghanam joined #gluster
09:27 meghanam_ joined #gluster
09:27 dmyers im having trouble with a site that the browser is saying "waiting for $IP" forever using glusterfs over a mount and symlink with php. has anyone else had that happen?
09:30 Pupeno joined #gluster
09:32 Thilam dmyers : yes I have increased the performance.cache_size to 1GB
09:32 glusterbot New news from newglusterbugs: [Bug 1136769] AFR: Provide a gluster CLI for automated resolution of split-brains. <https://bugzilla.redhat.com/show_bug.cgi?id=1136769>
09:33 Thilam I don't know if it's enough. Is there a way to know the real time usage of the cache ?
09:36 Thilam joined #gluster
09:39 kshlm joined #gluster
09:39 dmyers hmmm im not sure but that would be helpful to me too
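For reference, the cache option being discussed is set per volume with the gluster CLI (the option name uses a hyphen: performance.cache-size). A sketch, with VOLNAME as a placeholder:

    # raise the read cache for the volume
    gluster volume set VOLNAME performance.cache-size 1GB

    # values changed from the defaults show up under "Options Reconfigured"
    gluster volume info VOLNAME

There is no simple live "cache usage" counter in the CLI; a statedump (gluster volume statedump VOLNAME) includes per-translator internals and is probably the closest thing to real-time cache figures.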
09:39 RameshN joined #gluster
09:45 atalur joined #gluster
09:49 bala joined #gluster
09:59 dusmant joined #gluster
09:59 wgao joined #gluster
10:04 sputnik13 joined #gluster
10:13 _NiC joined #gluster
10:22 meghanam_ joined #gluster
10:22 meghanam joined #gluster
10:47 giannello joined #gluster
10:49 qdk joined #gluster
10:52 ccha2 joined #gluster
10:58 getup- joined #gluster
10:58 nbalachandran joined #gluster
10:58 calum_ joined #gluster
11:01 xleo joined #gluster
11:14 ira joined #gluster
11:17 ppai joined #gluster
11:23 mojibake joined #gluster
11:31 FarbrorLeon joined #gluster
11:32 FarbrorLeon joined #gluster
11:32 glusterbot New news from newglusterbugs: [Bug 1136810] Inode leaks upon repeated touch/rm of files in a replicated volume <https://bugzilla.redhat.com/show_bug.cgi?id=1136810>
11:34 todakure joined #gluster
11:44 kshlm joined #gluster
11:45 ricky-ti1 joined #gluster
11:50 diegows joined #gluster
11:50 pradeepto joined #gluster
11:52 firemanxbr joined #gluster
11:53 LebedevRI joined #gluster
11:56 chirino joined #gluster
12:00 JustinClift *** Weekly GlusterFS Community Meeting is starting NOW in #gluster-meeting on irc.freenode.net ***
12:02 jdarcy joined #gluster
12:03 glusterbot New news from newglusterbugs: [Bug 1136822] creating special files like device files is leading to pending data changelog when one of the brick is down <https://bugzilla.redhat.com/show_bug.cgi?id=1136822> || [Bug 1136823] Find | xargs stat leads to mismatching gfids on files without gfid <https://bugzilla.redhat.com/show_bug.cgi?id=1136823> || [Bug 1136825] Metadata sync in self-heal is not happening inside metadata locks <https://bugzilla.redh
12:04 ppai joined #gluster
12:05 meghanam_ joined #gluster
12:05 meghanam joined #gluster
12:06 hagarth joined #gluster
12:08 atinmu joined #gluster
12:16 mariusp joined #gluster
12:18 itisravi joined #gluster
12:18 dusmant joined #gluster
12:27 B21956 joined #gluster
12:30 kanagaraj joined #gluster
12:33 glusterbot New news from newglusterbugs: [Bug 1136835] crash on fsync <https://bugzilla.redhat.com/show_bug.cgi?id=1136835>
12:39 rjoseph joined #gluster
12:41 LHinson joined #gluster
12:42 altmariusp joined #gluster
12:54 todakure joined #gluster
12:55 todakure someone, see that book ? [ https://www.packtpub.com/web-development/implementing-samba-4 ] does have a example of setup a HA with glusterfs and CTDB :D
12:55 glusterbot Title: Implementing Samba 4 | Packt (at www.packtpub.com)
13:02 dusmant joined #gluster
13:10 mariusp joined #gluster
13:18 kanagaraj joined #gluster
13:20 nbalachandran joined #gluster
13:22 sputnik13 joined #gluster
13:24 julim joined #gluster
13:28 recidive joined #gluster
13:31 peema joined #gluster
13:34 lalatenduM joined #gluster
13:34 anoopcs joined #gluster
13:38 tdasilva joined #gluster
13:41 gmcwhistler joined #gluster
13:44 jmarley joined #gluster
13:44 coredump joined #gluster
13:45 bennyturns joined #gluster
13:46 bala joined #gluster
13:50 anoopcs joined #gluster
13:51 mariusp joined #gluster
13:54 plarsen joined #gluster
13:58 bit4man joined #gluster
14:00 rwheeler joined #gluster
14:01 harish_ joined #gluster
14:02 xleo joined #gluster
14:03 xleo joined #gluster
14:03 bfoster joined #gluster
14:06 dusmant joined #gluster
14:11 anands left #gluster
14:12 kshlm joined #gluster
14:18 coredump joined #gluster
14:19 theron joined #gluster
14:20 wushudoin joined #gluster
14:23 harish joined #gluster
14:24 giannello joined #gluster
14:25 Pupeno_ joined #gluster
14:27 _Bryan_ joined #gluster
14:29 sputnik13 joined #gluster
14:34 zerick joined #gluster
14:35 xleo joined #gluster
14:42 jobewan joined #gluster
14:50 LHinson joined #gluster
14:51 bet_ joined #gluster
14:53 John_HPC joined #gluster
14:56 John_HPC It seems one of my replicas still is showing a lot of gfids as needing to be healed. However, nothing seems to be showing the files are in need of healing; there are no split brains, failed heals, etc...
14:56 John_HPC http://paste.ubuntu.com/8224142/
14:56 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
14:59 peema This may seem like a stupid question, but what's the distinction between production and stable? If I wanted to go live in the near/medium term should I be looking to 3.4 or 3.5?
15:00 John_HPC peema, I would go with 3.5; so far I've been updated at each release and its been fine. I don't see a reason to stay at 3.4
15:01 peema Okay, cheers.
15:01 John_HPC stable = no more development; production = current version
15:02 John_HPC stable may have some security fixes, but won't be putting new functionality in
15:02 hagarth joined #gluster
15:03 peema Given that, is there more documentation on geo-replication in 3.5 than https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md or should I be mining the mailing list and/or source?
15:03 glusterbot Title: glusterfs/admin_distributed_geo_rep.md at master · gluster/glusterfs · GitHub (at github.com)
15:04 peema The particular use case I'm looking at is where I've expanded the volumes (master and slave) after replication is established.
15:05 jdarcy joined #gluster
15:05 John_HPC peema, unfortunately I don't know that. I haven't touched the geo-replication.
15:06 peema Sure, fair enough.
15:12 bene2 joined #gluster
15:13 sputnik13 joined #gluster
15:16 djgiggle joined #gluster
15:17 deeville joined #gluster
15:19 skippy where can I find details on the max_read mount value?  What is it?  Can I change it?  And would I want to?
15:19 djgiggle Hi, is it feasible to use my web server as a glusterfs server and another server (with bigger space) to serve as a peer, or should I just set the web server as a peer and get the server with bigger space to be the glusterfs server?
15:22 dtrainor joined #gluster
15:22 rotbeard joined #gluster
15:22 an joined #gluster
15:22 dtrainor joined #gluster
15:23 georgeh joined #gluster
15:24 georgeh_ joined #gluster
15:24 georgeh joined #gluster
15:24 georgeh left #gluster
15:25 Pupeno joined #gluster
15:33 ricky-ti2 joined #gluster
15:33 RameshN joined #gluster
15:36 giannello joined #gluster
15:37 lalatenduM joined #gluster
15:52 lmickh joined #gluster
16:01 Spiculum joined #gluster
16:02 JoeJulian djgiggle: What's your end goal?
16:05 wushudoin| joined #gluster
16:05 B219561 joined #gluster
16:05 eclectic joined #gluster
16:06 SteveCooling joined #gluster
16:06 saltsa_ joined #gluster
16:06 Debolaz joined #gluster
16:06 Debolaz joined #gluster
16:06 Diddi_ joined #gluster
16:06 fyxim__ joined #gluster
16:08 d-fence joined #gluster
16:08 Pupeno_ joined #gluster
16:08 siel joined #gluster
16:08 ws2k3 joined #gluster
16:08 Andreas-IPO joined #gluster
16:08 siel joined #gluster
16:09 and` joined #gluster
16:09 dtrainor_ joined #gluster
16:09 mibby joined #gluster
16:09 kalzz joined #gluster
16:09 lezo__ joined #gluster
16:09 Bardack joined #gluster
16:09 verdurin_ joined #gluster
16:09 masterzen joined #gluster
16:10 AaronGr joined #gluster
16:10 capri joined #gluster
16:15 toordog-work joined #gluster
16:18 xleo joined #gluster
16:19 sputnik13 joined #gluster
16:28 swebb joined #gluster
16:40 hagarth joined #gluster
16:43 sputnik13 joined #gluster
16:47 anoopcs joined #gluster
16:52 booooooby joined #gluster
16:53 sputnik13 joined #gluster
16:54 John_HPC_ joined #gluster
16:55 booooooby Hello gluster people, this may be a stupid question but I just want to make sure I am not missing out on something: I have 4 gluster servers (2 bricks on each) and each brick is created using LVM. When I expand the brick by adding a disk to the LVM, gluster seems to detect the extra space and the size of gluster grows also. So am I right to assume
16:55 booooooby that nothing needs to be done to gluster? that just adding extra storage to the bricks is enough?
16:56 swebb left #gluster
16:56 Pupeno joined #gluster
16:56 LHinson joined #gluster
17:02 booooooby anybody ?
17:03 marcoceppi joined #gluster
17:05 John_HPC_ booooooby, Not to my knowledge. You should be good.
17:06 PeterA joined #gluster
17:06 booooooby gluster is smarter than i thought :)
17:07 toordog-work booooooby when you add a disk to your LVM, the exact steps are: resize your logical volume and your filesystem *XFS* ?
17:08 semiosis lvextend & xfs_grow
17:08 cristov joined #gluster
17:14 booooooby semiosis : vgextend lvextend xfs_growfs
17:14 booooooby thats what I do
17:14 semiosis booooooby: pvcreate vgextend lvextend xfs_growfs.  checkmate
17:14 marcoceppi joined #gluster
17:14 semiosis ;)
17:14 booooooby yes pvcreate before :D
17:17 coredump joined #gluster
17:20 skippy can you not just pass "-r" to the lvextend to grow the fs?  Does that not work with XFS?
17:22 booooooby skippy: didn't know lvextend had a -r option
17:24 sputnik13 joined #gluster
17:26 skippy i've used `lvextend -r` with ext3 filesystems with great success. never tried with XFS.  I'm assuming it works fine.
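A sketch of the brick-growing sequence discussed above, assuming an XFS brick on LVM, a new disk at /dev/sdX, a volume group/logical volume named vg_bricks/lv_brick1, and the brick mounted at /bricks/brick1 (all names illustrative):

    pvcreate /dev/sdX                  # initialise the new disk for LVM
    vgextend vg_bricks /dev/sdX        # add it to the volume group

    # -r (--resizefs) grows the filesystem as part of the lvextend;
    # for XFS it runs xfs_growfs under the hood
    lvextend -r -l +100%FREE /dev/vg_bricks/lv_brick1

    # or, if the LV was grown without -r, grow the mounted filesystem explicitly
    xfs_growfs /bricks/brick1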
17:29 _dist joined #gluster
17:31 marcoceppi joined #gluster
17:31 LHinson1 joined #gluster
17:31 barnim joined #gluster
17:32 _Bryan_ joined #gluster
17:33 neofob joined #gluster
17:36 booooooby left #gluster
17:38 gmcwhistler joined #gluster
17:43 mariusp joined #gluster
17:47 luckyinva joined #gluster
17:48 simulx2 joined #gluster
17:53 Philambdo joined #gluster
18:04 Rafi_kc joined #gluster
18:06 elico joined #gluster
18:06 glusterbot New news from newglusterbugs: [Bug 1130023] [RFE] Make I/O stats for a volume available at client-side <https://bugzilla.redhat.com/show_bug.cgi?id=1130023>
18:13 Pupeno joined #gluster
18:22 diegows joined #gluster
18:29 Rafi_kc joined #gluster
18:37 PeterA1 joined #gluster
18:38 Rafi_kc joined #gluster
18:39 bennyturns joined #gluster
18:40 PeterA joined #gluster
18:42 Pupeno joined #gluster
18:45 Rafi_kc joined #gluster
18:45 ghenry_ joined #gluster
18:49 Philambdo joined #gluster
18:55 rafi1 joined #gluster
18:55 Rafi_kc joined #gluster
19:00 bennyturns joined #gluster
19:17 Pupeno joined #gluster
19:25 luckyinva joined #gluster
19:39 mick271 joined #gluster
19:42 altmariusp joined #gluster
19:46 neofob left #gluster
19:49 siel joined #gluster
19:59 toordog-work what is the difference between gluster volume replace-brick VOLNAME BRICK NEW-BRICK start and gluster volume replace-brick VOLNAME BRICK NEW-BRICK commit ?
19:59 toordog-work is it that  after the migration you must tell the cluster that the new brick is replacing the old one ?
20:00 semiosis start moves the data, commit removes the previous brick, iirc
20:00 toordog-work does the old brick need to be decommissioned?  *remove-brick*
20:01 semiosis you should try it on a test volume
20:01 semiosis that's really the best way to understand
20:02 toordog-work I will, but i just wanted to get the rough idea before attempting
20:02 toordog-work always better to know more or less where you are going than to go in completely blind :)
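A sketch of the replace-brick workflow being described, as it worked in the 3.4/3.5-era CLI (VOLNAME and the brick paths are placeholders):

    # start migrating data from the old brick to the new one
    gluster volume replace-brick VOLNAME server1:/bricks/old server2:/bricks/new start

    # watch migration progress
    gluster volume replace-brick VOLNAME server1:/bricks/old server2:/bricks/new status

    # once migration is complete, commit: the new brick takes the old brick's
    # place in the volume, so no separate remove-brick is needed afterwards
    gluster volume replace-brick VOLNAME server1:/bricks/old server2:/bricks/new commit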
20:07 peema joined #gluster
20:12 NigeyS joined #gluster
20:12 NigeyS evening :)
20:13 bene2 joined #gluster
20:15 NigeyS anyone know if having home directories inside a gluster mount would cause problems with SFTP? i've hit a brick wall :/
20:21 semiosis shouldn't matter
20:21 semiosis check logs
20:21 semiosis put the logs on pastie.org
20:25 Pupeno joined #gluster
20:26 NigeyS semiosis will do, ive set sftp exactly as it is on the current servers and i just get connection refused, and an error 255 in syslog, no real explanation..
20:27 NigeyS http://pastie.org/9525124
20:27 glusterbot Title: #9525124 - Pastie (at pastie.org)
20:28 NigeyS test is the user i created in /home/sites which is the gluster mount
20:28 Pupeno_ joined #gluster
20:40 asku joined #gluster
20:43 semiosis connection refused?  as in TCP RST?  are you sure the sftp daemon is running?
20:44 dmyers joined #gluster
20:44 NigeyS yup, if i keep the sftp block i added to sshd_config and restart, it refuses connection
20:44 dmyers joined #gluster
20:45 NigeyS [97110.232938] init: ssh main process (28959) terminated with status 255
20:45 NigeyS that being the error it spits out
20:45 NigeyS not very helpful
20:45 NigeyS ello dmyers
20:46 semiosis can you log in with a user whose home directory is not on glusterfs?
20:46 semiosis lets try to isolate if this problem has anything to do with gluster (because i doubt it does)
20:47 dmyers hey NigeyS
20:47 dmyers i played around with glusterfs on m1.large ec2 yesterday
20:47 NigeyS no, it totally shuts down ssh even though its still showing as running, if i remove the sftp block, i can get back in just fine
20:48 NigeyS dmyers how did you find it ?
20:48 dmyers i was testing on a web server client mounted with it and each of the pages served from it had "waiting for hostname..." so im hoping to try again today
20:48 NigeyS eek
20:48 dmyers the ami i had had mounted the instance store to /mnt and i mounted an ebs volume to /mnt/ebs - im wondering if that was the problem
20:49 dmyers or using ext4 which i read (old post) wasnt as performant as ext3? or maybe use xfs?
20:50 skippy I have two servers, each with one brick in a replica 2 volume. I want to add a 3rd server with a brick for replica 3.  That worked.  But the new brick is empty and rebalance says "not a distribute volume or contains only 1 brick."
20:50 skippy how can I correctly get all the data from the existing bricks on to the new brick?
20:52 dmyers skippy: did you peer the new server and then add the brick to the volume?
20:52 skippy i did
20:52 dmyers oh hm im not sure i haven't gotten farther than that i thought it would be automatic
20:53 dmyers NigeyS: were you setting up users home dir to sync to your gluster? i was kind of wanting to do that and the hosts file but wasnt sure if it was an okay practice
20:54 dmyers i think my old job did that at least with the home dir thing as i could ssh into any server when i had my public key setup once, im not sure how you do it but it was pretty awesome
20:55 NigeyS dmyers well, weve got our websites in /home/sites which is the gluster mount, each user has a folder in /home/sites, say test.com, so home dir is /home/sites/test.com etc. works ok on the old system, but atm it's giving me quite a bit of trouble when setting up SFTP
20:55 zerick joined #gluster
20:56 Pupeno joined #gluster
20:56 gmcwhist_ joined #gluster
20:59 dmyers ohhh i gotcha hmm
21:02 NigeyS hmm semiosis think i found a solution, if i set Pam to no in sshd_conf it restarts fine and i can sftp, but my key based logins no longer work :|
21:02 semiosis hmmm
21:03 NigeyS my thoughts exactly
21:04 Pupeno_ joined #gluster
21:06 dmyers NigeyS:  have you gotten a web server to serve files from your /home/sites/test.com?
21:06 dmyers wondering if you have had the "waiting for test.com...." in the browser like i have, in chrome devtools it says 500 server error but the pages work fine but hang on forever
21:07 coredump joined #gluster
21:09 skippy dmyers: interesting fact:  adding a new replica brick does not immediately sync the data.  But accessing the data from one of the existing bricks seems to trigger it.
21:09 skippy I did an `ls` on the mounted volume and the files started showing up in the new brick.
21:09 skippy Seems somewhat magic, and unintuitive, but ... hey, it wosk.
21:09 skippy works
21:10 toordog-work skippy when you added your brick, did you rebalance your data?
21:10 _Bryan_ joined #gluster
21:10 skippy toordog-work: I tried, but this is a straight replica, not distribute.
21:10 toordog-work ok
21:10 skippy replica 3.   3 servers, each with one brick.
21:10 toordog-work i thought the rebalance had to be done whether distributed or not
21:10 toordog-work I'm still confused about what the difference is between distribute and not, since the command is exactly the same
21:10 skippy volume rebalance: testvol: failed: Volume testvol is not a distribute volume or contains only 1 brick.
21:11 toordog-work ok
21:11 semiosis skippy: what version of glusterfs are you using?
21:11 NigeyS dmyers soon as i fix this sftp issue ill be ready to test a website
21:11 skippy semiosis: glusterfs 3.5.2 built on Jul 31 2014 18:47:54
21:11 dmyers skippy: oh i gotcha hm
21:11 semiosis toordog-work: rebalance is strictly a distribute thing
21:12 toordog-work semiosis what is the difference between distribute and not distribute?
21:12 semiosis skippy: you might have tried a 'gluster volume heal full'
21:12 semiosis (something like that)
21:12 skippy the new brick does not contain all the files.  Only as files are accessed from the volume do they show up on the new brick
21:13 skippy ah, heal full is doing stuff.
21:13 skippy heal (without full) didn't do anything
21:15 skippy thanks, semiosis.  Didn't find that documented anywhere in the "expand volumes" pages.
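A sketch of the sequence skippy just went through, assuming an existing replica 2 volume named testvol and a new brick on server3 (the brick path is illustrative):

    # raise the replica count from 2 to 3 by adding the new brick
    gluster volume add-brick testvol replica 3 server3:/bricks/testvol

    # kick off a full self-heal so existing files are copied to the new brick,
    # rather than being healed lazily as they are accessed
    gluster volume heal testvol full

    # check what is still pending
    gluster volume heal testvol info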
21:17 semiosis toordog-work: distribute divides files among bricks or replica sets, so if your volume is replica 2 and has only two bricks, then each brick has all the files.  if you had a replica 2 volume with 4 bricks, then each brick would have half of all the files
21:19 qdk joined #gluster
21:21 Pupeno joined #gluster
21:23 NigeyS semiosis looks like i've fixed it :|
21:23 semiosis congrats. how?
21:23 NigeyS moved the sftp block to the very end of the file
21:23 semiosis a ha!
21:23 NigeyS it shouldnt have worked, but it does :|
21:24 dmyers woot
21:24 NigeyS i wont get too excited, i have to get this working nicely with ldap next..lol
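For what it's worth, the sshd_config ordering is the likely explanation rather than gluster: a Match block applies to everything that follows it, so an sftp block placed mid-file can pull later global directives into its scope (or hit keywords that aren't valid inside Match) and make sshd refuse to start. A sketch of the kind of layout that works, with the group name and chroot path purely illustrative:

    # global section, near the top of sshd_config
    Subsystem sftp internal-sftp

    # ... all other global options ...

    # Match blocks go last; everything below applies only to matched users
    Match Group sftponly
        ChrootDirectory /home/sites/%u
        ForceCommand internal-sftp
        AllowTcpForwarding no
        X11Forwarding no

Note that OpenSSH requires the ChrootDirectory path to be root-owned and not group- or world-writable, which is worth checking when that path lives on a gluster mount.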
21:27 NigeyS dmyers if you're around in about 30 minutes i should be ready to test a website on gluster
21:27 toordog-work semiosis and in a distributed stripe how would it be?
21:31 dmyers sweet
21:32 semiosis toordog-work: ,,(stripe)
21:32 glusterbot toordog-work: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
21:34 dmyers NigeyS: ill be back in a bit if you're still on :D
21:35 NigeyS oki no probs, if i miss you i'll pop on tomorrow :)
21:43 toordog-work got it, but even so, there is always the case of, let's say, a popular file on a website that is accessed more than the rest; that would benefit from being striped
21:44 toordog-work but i still have lot of experiment to do before I understand fully
21:46 diegows joined #gluster
21:47 semiosis toordog-work: it would benefit from front-end caching!
21:47 semiosis toordog-work: you're talking about small files, so stripe is not a good fit.
21:47 semiosis stripe is for huuuuuuuge files
21:52 xleo joined #gluster
22:16 toordog-work alright
22:16 toordog-work so i will probably go with distributed replicate instead
22:16 toordog-work based on what i can read and the feedback :)
22:20 Pupeno_ joined #gluster
22:22 semiosis most people use distributed-replicated (or just replicated)
22:26 toordog-work is it better to have small bricks or doesn't matter?
22:28 semiosis depends on your needs, but in general i prefer more, smaller bricks.  they heal faster & can give better performance due to parallelism
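To make the distribute-vs-replicate distinction above concrete: in a distributed-replicated volume the bricks are grouped into replica sets in the order given, and files are distributed across those sets. A sketch with four illustrative bricks forming two replica-2 pairs:

    gluster volume create VOLNAME replica 2 \
        server1:/bricks/b1 server2:/bricks/b1 \
        server3:/bricks/b2 server4:/bricks/b2
    gluster volume start VOLNAME

Here server1/server2 hold one copy-pair and server3/server4 the other; each file lands on exactly one pair, which is why each brick ends up with roughly half of all the files.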
22:36 sputnik13 joined #gluster
22:53 Pupeno joined #gluster
23:06 cmtime joined #gluster
23:06 recidive joined #gluster
23:12 sputnik13 joined #gluster
23:13 elico joined #gluster
23:29 purpleidea semiosis: and there are 8TB drives now :)
23:34 avati joined #gluster
23:35 gmcwhistler joined #gluster
23:39 JoeJulian purpleidea: I know! And I'm going to have some to play with very soon.
23:48 dmyers NigeyS: still on?
23:49 Zordrak joined #gluster
23:49 Zordrak joined #gluster
23:55 Norky joined #gluster
23:56 ghenry joined #gluster
