
IRC log for #gluster, 2015-08-11


All times shown according to UTC.

Time Nick Message
00:04 calavera joined #gluster
00:11 CyrilPeponnet joined #gluster
00:11 plarsen joined #gluster
00:29 shyam joined #gluster
00:55 calavera joined #gluster
01:14 dbruhn joined #gluster
01:34 suliba joined #gluster
01:45 PaulCuzner joined #gluster
01:54 calavera joined #gluster
02:12 nangthang joined #gluster
02:21 calavera joined #gluster
02:25 harish joined #gluster
02:28 Lee-- joined #gluster
02:41 dgandhi joined #gluster
02:45 bharata-rao joined #gluster
03:19 kdhananjay joined #gluster
03:24 vmallika joined #gluster
03:28 dusmant joined #gluster
03:28 auzty joined #gluster
03:33 aravindavk joined #gluster
03:36 atinm joined #gluster
03:38 shubhendu joined #gluster
03:44 [7] joined #gluster
03:46 sakshi joined #gluster
03:47 overclk joined #gluster
03:52 nbalacha joined #gluster
03:56 aezy joined #gluster
03:57 ramky joined #gluster
03:59 sakshi joined #gluster
04:01 plarsen joined #gluster
04:04 nishanth joined #gluster
04:05 aezy anyone about who's played with the gluster samba vfs and had samba panic with it?
04:08 gem joined #gluster
04:08 kanagaraj joined #gluster
04:19 RameshN joined #gluster
04:23 calisto joined #gluster
04:26 yazhini joined #gluster
04:26 dewey joined #gluster
04:27 jwd joined #gluster
04:28 ppai joined #gluster
04:30 jwaibel joined #gluster
04:36 deepakcs joined #gluster
04:47 ramteid joined #gluster
04:48 vimal joined #gluster
04:53 aravindavk joined #gluster
04:56 ndarshan joined #gluster
04:58 aravindavk joined #gluster
04:59 kotreshhr joined #gluster
05:00 jiffin joined #gluster
05:05 elico joined #gluster
05:08 ppai joined #gluster
05:10 maveric_amitc_ joined #gluster
05:20 lilliput joined #gluster
05:22 rafi joined #gluster
05:24 pppp joined #gluster
05:25 vmallika joined #gluster
05:27 hgowtham joined #gluster
05:27 ipmango joined #gluster
05:28 ashiq joined #gluster
05:36 jiffin joined #gluster
05:44 meghanam joined #gluster
05:49 jwd joined #gluster
05:56 kdhananjay joined #gluster
06:06 Saravana_ joined #gluster
06:06 kotreshhr1 joined #gluster
06:07 dusmant joined #gluster
06:17 harish joined #gluster
06:21 atalur joined #gluster
06:22 kdhananjay joined #gluster
06:23 vimal joined #gluster
06:30 hchiramm joined #gluster
06:38 dusmant joined #gluster
06:39 ramky joined #gluster
06:43 raghu joined #gluster
06:48 ramky joined #gluster
06:52 kovshenin joined #gluster
07:08 Slashman joined #gluster
07:11 nangthang joined #gluster
07:18 aravindavk joined #gluster
07:22 poornimag joined #gluster
07:25 Manikandan joined #gluster
07:29 Trefex joined #gluster
07:30 ctria joined #gluster
07:48 ppai joined #gluster
07:49 kdhananjay1 joined #gluster
07:51 fsimonce joined #gluster
07:53 aravindavk joined #gluster
07:53 ctria joined #gluster
07:58 poornimag joined #gluster
07:58 Norky joined #gluster
08:07 kotreshhr joined #gluster
08:12 neha joined #gluster
08:16 Pupeno joined #gluster
08:19 ppai joined #gluster
08:20 akay1 joined #gluster
08:27 harish joined #gluster
08:35 anil joined #gluster
08:40 ctria joined #gluster
08:45 gem joined #gluster
08:49 LebedevRI joined #gluster
09:00 ppai joined #gluster
09:05 rjoseph joined #gluster
09:08 ira joined #gluster
09:17 nishanth joined #gluster
09:31 neha joined #gluster
09:34 Bhaskarakiran joined #gluster
09:40 ndarshan joined #gluster
09:54 Lee1092 joined #gluster
09:56 PaulCuzner joined #gluster
09:58 vimal joined #gluster
10:05 gem joined #gluster
10:11 ndarshan joined #gluster
10:12 gem_ joined #gluster
10:15 Philambdo joined #gluster
10:17 muneerse2 joined #gluster
10:33 kkeithley1 joined #gluster
10:41 overclk joined #gluster
10:42 ppai joined #gluster
10:44 kotreshhr joined #gluster
10:54 firemanxbr joined #gluster
11:05 necrogami joined #gluster
11:07 ajames41678 joined #gluster
11:07 ajames_41678 joined #gluster
11:07 ppai joined #gluster
11:11 shyam joined #gluster
11:18 poornimag joined #gluster
11:26 moss left #gluster
11:33 autoditac joined #gluster
11:36 hchiramm joined #gluster
11:36 ramky joined #gluster
11:47 RameshN joined #gluster
11:50 sigurd joined #gluster
11:51 ahab joined #gluster
11:52 necrogami joined #gluster
11:52 ira joined #gluster
11:53 Guest45699 Good afternoon, evening or morning, depending on your timezone - Would this be a place where I could get some advice on a GlusterFS Cluster I am setting up?
11:55 kovshenin joined #gluster
12:00 kotreshhr joined #gluster
12:00 ramky joined #gluster
12:01 rafi1 joined #gluster
12:02 aravindavk joined #gluster
12:03 rafi joined #gluster
12:06 rjoseph joined #gluster
12:12 ppai joined #gluster
12:17 chirino joined #gluster
12:18 kdhananjay joined #gluster
12:23 autoditac joined #gluster
12:35 jcastill1 joined #gluster
12:38 jwd joined #gluster
12:39 stickyboy joined #gluster
12:40 jcastillo joined #gluster
12:41 rafi joined #gluster
12:44 ekuric joined #gluster
12:46 theron joined #gluster
12:48 DV joined #gluster
12:57 shyam joined #gluster
13:20 B21956 joined #gluster
13:25 hamiller joined #gluster
13:29 julim joined #gluster
13:34 aaronott joined #gluster
13:36 ppai joined #gluster
13:45 plarsen joined #gluster
13:48 Bhaskarakiran joined #gluster
13:49 jcastill1 joined #gluster
13:51 Twistedgrim joined #gluster
13:54 jcastillo joined #gluster
13:54 calavera joined #gluster
13:58 jcsp_ joined #gluster
13:58 jcsp_ hey -- fuse client question: do you guys have to disable the page cache to get correct behaviour?
13:58 jcsp_ we're looking at this in ceph right now
13:59 vimal joined #gluster
14:02 sage_ joined #gluster
14:07 aravindavk joined #gluster
14:10 autoditac joined #gluster
14:15 aravindavk joined #gluster
14:19 autoditac joined #gluster
14:19 calavera joined #gluster
14:22 ctria joined #gluster
14:23 autoditac_ joined #gluster
14:25 kotreshhr joined #gluster
14:25 kotreshhr left #gluster
14:30 Philambdo joined #gluster
14:31 autoditac joined #gluster
14:31 wushudoin joined #gluster
14:32 wushudoin joined #gluster
14:33 pppp joined #gluster
14:33 autoditac joined #gluster
14:33 kdhananjay joined #gluster
14:34 bennyturns joined #gluster
14:35 nsoffer joined #gluster
14:37 autoditac joined #gluster
14:39 neofob joined #gluster
14:48 squizzi_ joined #gluster
14:50 autoditac_ joined #gluster
14:56 nsoffer joined #gluster
14:59 _Bryan_ joined #gluster
14:59 Philambdo joined #gluster
15:00 dewey joined #gluster
15:04 cyberswat joined #gluster
15:08 theron_ joined #gluster
15:14 dgandhi joined #gluster
15:18 Philambdo1 joined #gluster
15:19 autoditac joined #gluster
15:23 bene2 joined #gluster
15:25 rafi joined #gluster
15:28 julim joined #gluster
15:40 calavera joined #gluster
15:41 shubhendu joined #gluster
15:52 cholcombe joined #gluster
15:54 Twistedgrim joined #gluster
15:54 nsoffer joined #gluster
15:56 papamoose1 joined #gluster
16:01 aravindavk joined #gluster
16:06 shyam1 joined #gluster
16:12 Arrfab joined #gluster
16:13 srepetsk joined #gluster
16:17 srepetsk weird question for you guys; just installed a new cent6.7 box & tried adding it to my gluster/ovirt cluster (cent6.6); the new server can only see 3/9 of the current cluster, yet all current 9 can see each other. new 6.7 box runs glusterfs-3.7.3-1, olds are a mixture of 3.6.2, 3.6.3, 3.7.0, 3.7.2
16:20 papamoose joined #gluster
16:20 ira joined #gluster
16:22 JoeJulian srepetsk: Apparently they changed 3.7.3 to use unprivileged ports. Your older versions will need to set "option rpc-auth-allow-insecure on" in /etc/glusterfs/glusterd.vol.
16:23 JoeJulian srepetsk: And that's a lot of version mixing. That's not a good thing.
16:26 srepetsk JoeJulian: ah, thanks; yeah i'm working on making the environment more uniform; is the concern the 3.6 and 3.7 mix or even 3.7.x mismatch?
16:29 JoeJulian Well.. in that list, 3.7.0 scares me. 3.6.<3 has cross-version issues. And there are features (esp performance) that you just can't use until you're all current.
16:29 srepetsk lol ok
16:29 srepetsk glad I asked, then
16:30 magamo JoeJulian: What was wrong with 3.7.0?
16:30 JoeJulian memory corruption
16:30 JoeJulian invalid pointers
16:30 magamo Mmm, fun.  Glad I'm no longer running 3.7.0 in production anymore then.
16:31 srepetsk oh. i'll upgrade that one first.
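A minimal sketch of the change JoeJulian describes, for anyone hitting the same mixed-version peering problem: on each of the older servers, add the line

    option rpc-auth-allow-insecure on

inside the existing "volume management ... end-volume" block of /etc/glusterfs/glusterd.vol, then restart the management daemon (e.g. "service glusterd restart" on EL6). The rest of glusterd.vol varies by distro and release, so only that single option line is new.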
16:33 woakes07004 joined #gluster
16:36 Rapture joined #gluster
16:37 kovshenin joined #gluster
16:51 calavera joined #gluster
16:54 theron joined #gluster
17:02 bennyturns joined #gluster
17:17 dijuremo @JoeJulian, doesn't he also need: server.allow-insecure on ?
17:19 shyam joined #gluster
17:23 woakes070048 joined #gluster
17:26 JoeJulian dijuremo: I'm not sure yet. I've just started trying to repro this so I can understand what changed.
17:26 JoeJulian tbh, I figured if I told him to do that much and it still didn't work, he'd complain about it. ;)
17:27 bennyturns joined #gluster
17:27 janegil joined #gluster
17:28 dijuremo Some mention of the two parameters.... https://bugzilla.redhat.com/show_bug.cgi?id=1057292
17:28 glusterbot Bug 1057292: high, high, 3.4.2, bugs, NEW , option rpc-auth-allow-insecure should default to "on"
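The second parameter dijuremo asks about is a per-volume option rather than a glusterd.vol edit; a sketch, assuming a volume named myvol:

    gluster volume set myvol server.allow-insecure on

srepetsk reports later in the log that he ended up needing both settings.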
17:30 janegil hi all. considering setting up two boxes with glusterfs for ovirt on centos. This would be a glusterfs with two computers and 6x2TB disks on each box. We are considering using xfs or zfs for the underlying file system. Any recommendations on this?
17:32 JoeJulian I recommend you jump right in and do it! ;)
17:32 janegil thats always the best plan
17:32 janegil :D
17:32 JoeJulian I'm still recommending xfs for production. I'm using zfs at home.
17:32 dijuremo @janegil you will need a 3rd box without bricks for quorum
17:32 JoeJulian s/need/want/
17:32 glusterbot What JoeJulian meant to say was: srepetsk: Apparently they changed 3.7.3 to use unprivileged ports. Your older versions will want to set "option rpc-auth-allow-insecure on" in /etc/glusterfs/glusterd.vol.
17:32 JoeJulian lol
17:34 janegil was planning to do two ovirt compute nodes, and two synchronized storage nodes running glusterfs, so i guess we can use either of the compute nodes for quorum.
17:34 JoeJulian yes
17:34 janegil i have been running zfs for many years at home, first freebsd then later with ZoL, has been nothing but happy with that
17:35 dijuremo @janegil, I currently have two R720 with 14 drives in raid6, really considering breaking the raid and going zfs... only concern is iops... but this is for a file server...
17:35 srepetsk JoeJulian: lol yeah, it does seem to be working with just that one config change
17:36 dijuremo @janegil also running zfs at home on a single raidz2 with 10 3TB drives....
17:36 aravindavk joined #gluster
17:37 janegil anything special that makes you want to turn dijuremo?
17:38 dijuremo @janegil zfs snapshots to allow users to auto restore....
17:39 dijuremo @janegil My file server is not yet in production... I was trying to use it as a hybrid glusterfs store for ovirt and file server at the same time...
17:39 dijuremo ovirt = RHEVM I forget we have official RH at work...
17:41 janegil dijuremo: sounds like what we are trying to do. are there any particular reasons it would not be good to run glusterfs on top of zfs other than the reasons to not run zfs in general?
17:41 janegil first timer here for both ovirt and glusterfs. trying to just avoid basic mistakes :)
17:44 theron_ joined #gluster
17:49 _maserati joined #gluster
17:49 gem joined #gluster
17:51 gem joined #gluster
17:53 gem joined #gluster
18:01 JoeJulian No strong reasons. GlusterFS is adding dedup and has erasure coding so the two main features people want zfs for are going to be integral.
18:02 _maserati JoeJulian: did you say active/active systems (over a geographic distance) was on the radar for Gluster or was it just that many people are wanting it?
18:02 _maserati active/active replication
18:02 JoeJulian It's been on the to-do list, but I don't know if it's even being actively worked on.
18:02 _maserati ok
18:03 janegil @JoeJulian: thanks, think were gonna just go for mdadm+zfs
18:04 janegil xfs that is, not zfs
18:04 janegil some buttons are too close on the keyboard
18:05 _maserati Did any of you read that CSO Blog post from Oracle yesterday/today, complaining about customers testing for vulnerabilities in their products. /chuckle
18:05 _maserati "We have very high security standards, just trust us." - developers of java
18:08 autoditac joined #gluster
18:10 haomaiw__ joined #gluster
18:11 JoeJulian hehe
18:12 JoeJulian I lose interest in an article if Oracle is mentioned.
18:12 kkeithley_ Probably the same "top people" who are working on the Arc of the Covenant
18:12 kkeithley_ s/Arc/Ark/
18:12 glusterbot What kkeithley_ meant to say was: Probably the same "top people" who are working on the Ark of the Covenant
18:16 srepetsk JoeJulian: btw yes it does seem that i needed to set server.allow-insecure as well, though ovirt is now complaining about something else
18:17 JoeJulian Cool, thanks.
18:21 calavera joined #gluster
18:21 ira JoeJulian: I don't think there's a sane production site that uses ZFS dedupe ;).  As far as EC, I don't think it does it last I knew, in any freely available version.
18:22 ira JoeJulian: Derp, Ignore me
18:22 Trefex joined #gluster
18:22 _maserati ira: of course it does EC. But dedupe... the cost to benefit is too damn much
18:23 ira _maserati: They allow 8+4 now?
18:23 _maserati ira: if i recall correctly, the last i remember is ZFS wants 1 GB of RAM per 1TB of disk to dedupe... when we're talking about damn near petabyte.... lol
18:24 ira _maserati: I'd NEVER recommend ZFS dedupe ;)
18:24 _maserati ira: Right, i was agreeing with you... no sane shop would use it lol
18:25 ira Never mind any performance issues from it.. etc ;)
18:25 _maserati Now that oracle is strangling zfs, it's hard to love anymore
18:25 _maserati RIP Sun
18:25 CyrilPeponnet if you need dedup pay for it and buy a netapp array
18:26 ira CyrilPeponnet: Honestly, I used to just use ZFS compression and get about 2 to 1 back.. that was good enough ;)
18:26 ramky joined #gluster
18:27 JoeJulian The presentation I saw on gluster's dedup made me a believer. I thought it was way too expensive and made no sense in a clustered filesystem before.
18:28 ira JoeJulian: Who gave it?  I haven't caught up with it. :)
18:29 JoeJulian It was at the gluster summit....
18:29 ira Dan?
18:30 _maserati That presentation wouldn't happen to exist on the interwebs would it?
18:30 _maserati I'd love to see it
18:30 JoeJulian Dan Lambright, yes. The link to the slides is at http://www.gluster.org/community/documentation/index.php/GlusterSummit2015
18:30 cliluw JoeJulian: Thanks.
18:31 _maserati wooooooo
18:32 _maserati YADL
18:33 theron joined #gluster
18:33 haomaiw__ joined #gluster
18:34 woakes070048 joined #gluster
18:35 ira Looks interesting... I'm curious how
18:35 _maserati ^
18:35 ira it'll fit in.
18:36 kkeithley_ IIRC much like bitrot
18:36 _maserati Maybe the blocks go into the YADL, query is made to see if that block exists anywhere and if so, make a reference pointer instead of actually storing another copy of the data
18:36 JoeJulian @tell dlambrig_ We're talking about you in #gluster
18:37 glusterbot JoeJulian: The operation succeeded.
18:38 kiwikrisp joined #gluster
18:40 papamoose left #gluster
18:43 tertiary joined #gluster
18:45 tertiary could someone help me get posix-locks to work? i modify the /etc/glusterfs/glusterd.vol file correct?
18:45 JoeJulian no
18:45 JoeJulian You install glusterfs, create a volume, start it and mount it. Done.
18:46 tertiary so it should be posix-locking already?
18:46 JoeJulian It is, yes.
18:46 JoeJulian Well, it is handling posix locking if your application is using locks.
18:47 JoeJulian If the application doesn't implement locking, there's no underlying way of forcing it.
18:47 tertiary yeah, I'm trying to write to a sqlite database on the share and I'm getting i/o errors
18:47 tertiary *concurrently
18:47 JoeJulian Yeah, they're really sloppy with their locking - last time I looked.
18:48 JoeJulian I think you can get away with it if you turn off some performance translators, but I don't recall which ones.
18:49 tertiary hmm ok
18:49 JoeJulian If it were me, I'd just turn them all off and then enable them one at a time while testing.
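A sketch of what "turn them all off" can look like, assuming a volume named myvol; these are the common client-side performance translators, though the exact set of options varies by release:

    gluster volume set myvol performance.write-behind off
    gluster volume set myvol performance.read-ahead off
    gluster volume set myvol performance.io-cache off
    gluster volume set myvol performance.quick-read off
    gluster volume set myvol performance.stat-prefetch off
    gluster volume set myvol performance.open-behind off

Setting each one back to "on" while re-running the concurrent sqlite test narrows down which translator breaks the locking.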
18:49 _maserati joined #gluster
18:49 JoeJulian Well, actually, if it were me, I'd just use mariadb. ;)
18:50 _maserati ^
18:50 _maserati I love mariadb
18:50 tertiary googling it now :)
18:51 _maserati It's the fork of mysql when oracle got their slimey greedy hands on it
18:51 dijuremo So for gluster file servers, is it better to split things at a certain size?
18:51 tertiary i see. its not embedded though huh
18:51 dijuremo Say have a few 4TB volumes vs a single 12 TB one?
18:52 _maserati Our glusterfs is one 32TB volume, no problems
18:52 dijuremo What if you had to fsck it?
18:52 _maserati doesn't gluster handle that ?
18:53 dijuremo I guess you just fsck the one node when is down?
18:53 dijuremo when *gluster* is down...
18:53 dijuremo Grr still thinking drbd... I guess you are right...
18:57 dijuremo In terms of networking is it better to dedicate a couple interfaces to glusterfs? For example have two bonded for the clients (nfs/samba/etc) and two for the gluster backend?
18:57 dijuremo @_maserati you hosting samba for windows clients?
18:57 _maserati dijuremo: yes
18:58 dijuremo @_maserati Roaming profiles?
18:58 _maserati dijuremo: no. Our business app writes its data via smb to gluster
18:58 _maserati millions of small files
18:59 _maserati dijuremo: But we're ass backwards. The developers are all firmly rooted in the windows stack and they hired a linux guy to do the backend stuff... soo.... yeah....
19:00 _maserati Them: Our software is failing at x and y, it's linux' fault! Give me linus' number! NOW!   Me: No, your code fucking sucks
19:00 dijuremo @_maserati I am still having problems with "small" files in Roaming profiles... very slow logons. But I just have not found the time to upgrade to 3.7.3 (currently at 3.6.x) for a consulting client...
19:01 _maserati dijuremo: I remember us talking about this a few weeks ago, maybe a month ago. I have no idea how to help. We're on 3.6.x as well, and I have 0 issues with access times
19:02 dlambrig_ joined #gluster
19:02 wushudoin joined #gluster
19:02 dijuremo @_maserati, I remember... I am just mentioning it. I just have to man up and do it. It sucks to have everything in two nodes, cannot really test small things, especially when gluster is the base for the virtualized windows servers...
19:03 _maserati Yeah, I can see that being a problem, ha
19:03 dijuremo @_maserati, when sh*t hits the fan everything is down... and I am just not that familiar with ovirt yet... Have gotten a few 3-4AM nights trying to get stuff back after screwing up gluster... LOL
19:04 dijuremo @_maserati But you usually only learn breaking it, right?
19:04 _maserati Well if your vm's arent having issues running on top of gluster, make some vm's and a gluster mock setup, and test!
19:05 dijuremo @_maserati Same two nodes have 3 gluster volumes, one for self-hosted engine, one for file server, one for vm hosting... so if I mess up with gluster I mess with everything...
19:05 _maserati assuming there's a level of write caching and what not with whatever hypervisor you're using, you should be able to test
19:05 _maserati Buy two cheap x86 systems off ebay, set up gluster, test!
19:06 _maserati hell I have about 20 dell servers at my house, pay for shipping and i'll mail you two
19:06 dijuremo We did not really find the Roaming profile issues during test.. :(
19:06 dijuremo I have other machines, it is just not the same scenario...
19:07 tertiary with all the performance translators removed I get the same issue, disk i/o error with concurrent sqlite writes. Just to clarify @JoeJulian, I edited the fuse volume file... right?
19:07 _maserati Well your workload might be rather intense for just 3 nodes?
19:07 _maserati err 2 nodes
19:07 dijuremo I have looked at iotop and iostat... not really intensive...
19:08 _maserati hmmm
19:08 dijuremo And we have good raid cards...
19:08 _maserati network isnt hammered either?
19:08 dijuremo Remember that if I ran the ls -lR or find on the brick itself it was a lot faster...
19:08 _maserati oh yeah right
19:08 dijuremo Nope... have 10Gbps cards and not saturated...
19:09 dijuremo Servers are in separate server rooms but switches have 20Gbps connections between them too... so not a network issue..
19:09 dijuremo Will have to really do the 3.7.x upgrade since 3.7.x has addressed some of the performance issues with small files from what I have read...
19:09 _maserati and the nodes are distributed or replicated?
19:10 dijuremo Just replicated
19:10 dijuremo One brick each per gluster volume...
19:10 _maserati then yeah, reads shouldnt have an issue
19:10 dijuremo Also the hosted engine and vm storage are on their own areca controller on SSDs.... the file server files are on 12 2TB drives on their own areca controller
19:12 _maserati You've got me beat, that configuration sounds rather generous than anything else.
19:12 dijuremo Exactly... It worked very well and was very efficient with btrfs on drbd till btrfs bit the dust :(
19:13 _maserati What happened with btrfs
19:13 dijuremo I was using the old code in RHEL6.x and it bombed...
19:13 dijuremo It will kernel panic on writes...
19:13 dijuremo Worked fine for about 3 years...
19:13 _maserati oh wow
19:14 lanning joined #gluster
19:14 dijuremo It was really fast to begin with, much faster than ext4 and I wanted a single replicated drbd of 20TB to avoid complexity issues... so I went btrfs
19:14 dijuremo ext4 had that 16GB limit...
19:15 dijuremo But so far very happy with gluster except for the slowness in roaming profiles...
19:20 dijuremo I meant 16TB not GB limit on ext4...
19:24 PaulCuzner joined #gluster
19:37 kiwikrisp Quick question, I run xenserver 6.2 and have gluster (ver 3.4) storage w/ NFS. Anybody have thoughts about upgrading to 3.6? I know that 3.4 is no longer supported and would like to get moved up during my next maintenance window. Tried moving to 3.5 and it killed the connection w/ xenserver previously. Don't want a repeat.
19:42 bennyturns joined #gluster
19:45 kovshenin joined #gluster
19:55 rwheeler joined #gluster
19:57 calavera joined #gluster
20:11 unclemarc joined #gluster
20:14 cliluw joined #gluster
20:24 Twistedgrim joined #gluster
20:25 Twistedgrim1 joined #gluster
20:47 Twistedgrim joined #gluster
20:51 JoeJulian kiwikrisp: Not sure why it would have had a problem with 3.5 even. Should work.
20:52 kiwikrisp JoeJulian: NFS server kept crashing, should I stay away from 3.7? I see they have an upgrade path to 3.7 directly from 3.4.
20:53 kiwikrisp should qualify, the gluster nfs server kept crashing.
20:55 JoeJulian I thing 3.7.3 is pretty acceptable for production at this point.
20:55 JoeJulian s/thing/think/
20:55 glusterbot What JoeJulian meant to say was: I think 3.7.3 is pretty acceptable for production at this point.
21:02 PaulePanter Does somebody use Ansible to deploy/configure Gluster servers?
21:02 PaulePanter *somebody in here
21:04 kiwikrisp OK, we'll jump there then. Another question. I found the documentation somewhat confusing. Does 3.7 have HA NFS built in? If so where are the instructions to configure it? I tried to set it up on a test box and flopped. Couldn't find the command lines nor clear requirements.
21:05 _maserati I hate you glusterbot.
21:05 _maserati s/hate/love
21:05 _maserati aww :(
21:07 PaulePanter I hate you _maserati.
21:08 PaulePanter s/hate/love/
21:08 glusterbot What PaulePanter meant to say was: I love you _maserati.
21:08 PaulePanter There you go.
21:08 badone_ joined #gluster
21:08 _maserati I love you PaulePanter.
21:08 _maserati s/love/loooveeee/
21:08 glusterbot What _maserati meant to say was: I loooveeee you PaulePanter.
21:08 _maserati :D
21:08 PaulePanter ;-)
21:09 _maserati can it do multiple words at once?
21:09 PaulePanter No idea.
21:09 * PaulePanter still hasn’t figured out the authentication stuff. That has to wait to tomorrow.
21:09 _maserati s/can/it/it/can/
21:09 _maserati damn
21:09 glusterbot _maserati: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
21:10 _maserati its okay glusterbot. i still lub you
21:13 calavera joined #gluster
21:16 victori joined #gluster
21:26 JoeJulian looks like _maserati need to go hang out in #regex for a while... ;)
21:28 _maserati you will rue the day... that is today... because you said I are dumb
21:28 JoeJulian kiwikrisp: no, you were reading about the nfs-ganesha integration. If there is usable documentation for that, I don't know where to find it.
21:29 JoeJulian pfft... I made no such claim. Just implied you might need a regex refresher.
21:29 _maserati jokesss... i just never wanted to learn regex cause it's scary looking
21:31 kiwikrisp OK, that's really the last piece that I need to make gluster my defacto storage for xenserver. Love it, but the ucarp has failed me more than once when the gluster nfs service has died and hung my vm's.
21:33 JoeJulian I just exported a volume with ganesha without pnfs, but I had to do it manually. I wasn't going to install corosync/pacemaker just for my little home network.
21:38 kiwikrisp So, is the configuration different to get the nfs server running on 3.7 than it was on 3.4? where I just set the properties for xenserver to be able to see it?
21:43 JoeJulian No, you can still do that. if you want pnfs though, or nfs4, or udp - you /can/ use ganesha.
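For the manual route JoeJulian mentions, a rough sketch of an nfs-ganesha export of a gluster volume using the GLUSTER FSAL (the volume name, hostname and export id are placeholders, and key names can differ between ganesha releases):

    EXPORT {
        Export_Id = 1;
        Path = "/gv0";
        Pseudo = "/gv0";
        Access_Type = RW;
        FSAL {
            Name = GLUSTER;
            Hostname = "localhost";
            Volume = "gv0";
        }
    }

This goes into ganesha's configuration (typically /etc/ganesha/ganesha.conf) before starting the nfs-ganesha service; the corosync/pacemaker pieces are only needed for the HA failover part.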
21:44 _maserati one regex tutorial in and i want to kill myself. I don't know why I just have a mental block with it. I'd much rather write in pure x86 assembly than learn this -.-
21:44 JoeJulian hehe
21:44 JoeJulian I couldn't live without being able to grep and sed.
21:45 _maserati well with pipes you don't need anything but grep :D
21:46 _maserati well i guess i always do google sed stuff.. i really do need to just learn regex.
21:46 theron_ joined #gluster
21:47 tg2 is there a way to check, from a client - which bricks it can connect to?
21:47 tg2 I have an issue where random clients can't see a particular brick
21:47 tg2 as a result they see a smaller storage pool
21:47 tg2 i want to see which server:/brick is not connected
21:47 tg2 that should be
21:48 autoditac joined #gluster
21:58 bennyturns joined #gluster
22:19 autoditac joined #gluster
22:22 ninkotech__ joined #gluster
22:23 mikemol joined #gluster
22:33 JoeJulian tg2: netstat. Use gluster volume status to get the ports assigned to each brick. You can even nc to those ports to see if a tcp connection can be established.
22:36 tg2 JoeJulian - on the client?
22:36 tg2 thought gluster was only on the storage
22:36 tg2 s/storage/server
22:37 tg2 I only have glusterfs on the clients
22:37 JoeJulian Right, you'd have to run the gluster command on the server to see what ports you're looking for.
22:38 nage joined #gluster
22:38 tg2 ok
22:38 tg2 i noticed only one server does this
22:38 tg2 one of it's bricks will drop out
22:38 tg2 some clients won't see that brick
22:38 tg2 others will
22:38 JoeJulian Then netstat -tnp | grep gluster to show what tcp connections are open.
22:38 tg2 that I see
22:39 tg2 but from the client
22:39 tg2 it is hard to see which brick it can't connect to
22:39 tg2 it just shows that it has %bricksize% less space in the volume than the other clients
22:39 tg2 restarting glusterd on the server doesn't seem to fix it - have to do a full system restart
22:39 tg2 running 3.7.2 globally so shouldn't be any version issues - unless there is something in 3.7.2 that was patched
22:39 tg2 3.7.1 had an issue where the client would just segfault and disconnect
22:41 JoeJulian netstat -tnp | awk '/gluster/{print $5}'
22:41 JoeJulian 192.168.2.3:49163
22:41 JoeJulian 192.168.2.30:24007
22:41 tg2 yeah i can see that from the server
22:41 tg2 but then I have to NC into each server/brick to see which won't connect
22:41 JoeJulian That's the client.
22:42 JoeJulian The attempts to reconnect are probably spamming the client log, too.
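Putting JoeJulian's steps together, a sketch of the whole check (volume name, host and port are examples):

    # on any server: list each brick and the TCP port it listens on
    gluster volume status myvol

    # on the suspect client: which brick ports does the fuse client actually have open?
    netstat -tnp | grep gluster

    # on the suspect client: is the missing brick port even reachable?
    nc -zv server1 49163

A brick that shows a port in volume status but never appears in the client's netstat output (and keeps producing reconnect errors in the client log under /var/log/glusterfs/) is the one to chase.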
22:42 tg2 ah
22:43 tg2 it will never reconnect
22:43 tg2 until I restart the offending server
22:43 tg2 rather, the server with the offending brick
22:43 tg2 but it's always a juggle to see which client is having issues connecting to which brick
22:43 JoeJulian So volume status doesn't show a port, I bet.
22:43 tg2 restarting glusterd doesn't work
22:43 tg2 no, it still shows a port on the offending server
22:43 tg2 since other clients are connected to it fine
22:43 tg2 its really a hash(client->server:/brick) issue
22:44 tg2 it will connect to one brick but not another
22:44 tg2 on the same server
22:44 JoeJulian Odd
22:44 tg2 ya
22:45 JoeJulian Check the brick logs.
22:45 JoeJulian The only guess I have is that the client is running out of privileged ports and is trying to connect with an unprivileged one.
22:46 tg2 this is all local network with no firewall
22:46 tg2 let me check ephemeral port range on the server
22:46 tg2 but nothing is pointing to that - other clients are fine
22:47 tg2 also this server only has 2 bricks, others have 4-5 running the same config
22:47 JoeJulian Always just the one server that's giving you fits?
22:47 tg2 usually it's the offending server
22:47 tg2 but it can be one brick or another
22:48 tg2 a few times it's happened where it was another server/client
22:48 tg2 but it's really random which client drops a brick
22:48 tg2 and I have 30+ clients
22:48 tg2 and 18 bricks
22:48 tg2 I only see one established connection per brick per client
22:49 JoeJulian Well, I'd try "gluster volume set $volname server.allow-insecure on"
22:49 JoeJulian But you can find out for sure if that's it from the brick log.
22:49 tg2 auth.allow: 192.168.0.*
22:50 JoeJulian irrelevant.
22:50 dingdong joined #gluster
22:50 dingdong hey all
22:50 tg2 ok let me set that
22:51 tg2 but i see only a handful of ports being used - nowhere near the ephemeral limit
22:51 tg2 and a tuple is (iirc) remotehost+remoteport+localhost+localport
22:52 dingdong as a newbie to glusterfs I am currently using it as a webserver. However the du command takes a long time before it shows the folder size, any idea to speed this up?
22:52 tg2 dingdong - it won't be as fast as local storage
22:52 JoeJulian delete files? ;)
22:52 tg2 don't expect it to be
22:52 _maserati rm -rf /mnt/glusterfs/*
22:53 tg2 try to do things in parallel instead of in series
22:53 dingdong hmm the stupid script runs ever hour to calculate the user space usage
22:53 tg2 such as du (which operates in series)
22:53 tg2 you might be able to leverage quotas
22:53 tg2 to do what you're trying
22:53 dingdong its about 25gb +
22:54 tg2 when you have a file system that is mounted by many clients
22:54 tg2 any one of those clients can change it
22:54 tg2 without the other "knowing"
22:54 tg2 so you can't assume that a file that was just read is still there
22:54 dingdong yeah true
22:54 nsoffer joined #gluster
22:54 tg2 inherently this means you have to go back to the brick server and ask it what size that file is
22:54 tg2 and even if you ask it again 2 seconds later
22:54 tg2 it has to check again
22:54 tg2 whereas on local disk - the kernel would know if you changed a file
22:55 dingdong caching would be nice (there's never a need for the real-time data)
22:55 dingdong @tg2 you are very RIGHT!
22:55 dingdong never thought like that
22:55 JoeJulian And, assuming it's replicated, it needs to check to make sure that one of the bricks doesn't report that the other brick is out of date.
22:55 tg2 yeah
22:56 tg2 then you have to allow for the fuse hamsters to run in their wheel for a few ms per request
22:56 JoeJulian I like to say, don't compare apples to orchards.
22:56 tg2 if you were to build a script however
22:56 tg2 that gets dirlist
22:56 tg2 and crawls it in paralllel
22:56 tg2 (node, ahem)
22:56 tg2 and checks each file individually and sums it all up
22:56 tg2 then it would probably run faster
22:56 tg2 than du
22:56 JoeJulian +1
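A minimal sketch of that idea in shell (the mount point and per-user layout are hypothetical; raise or lower -P to taste):

    # sum each user directory in parallel instead of one serial du over everything
    find /mnt/gluster/users -mindepth 1 -maxdepth 1 -type d -print0 \
        | xargs -0 -n 1 -P 8 du -sh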
22:57 tg2 it might also wreak havoc on your glusterd processes
22:57 tg2 ;)
22:57 tg2 I think there were some patches to 3.7 that sped this up
22:57 tg2 for directory listing anywayy
22:57 dingdong -1 for me, I started with nebieeE! :P
22:57 tg2 which IMO should be similar logic
22:57 JoeJulian Make your application update a database when files are uploaded or deleted.
22:57 tg2 yep
22:58 tg2 note that even "ls" can be slow
22:58 tg2 for directories with a lot of files
22:58 JoeJulian If you stored your quota in a database, you would only crawl to audit the database.
22:58 tg2 you can also just use glusterfs directory quota
22:58 tg2 and query that
22:58 dingdong ls is fast in my case
22:58 tg2 since iirc it will keep tabs on that
22:58 tg2 when files are changed
22:58 gildub joined #gluster
22:58 tg2 sort of a running tally
22:59 tg2 (joe might know better - kk would definitely)
22:59 JoeJulian But that would only be useful to you, of course, if the user's files were owned by the user.
22:59 dingdong they are
22:59 JoeJulian Oh, well there you go. Use quota.
22:59 tg2 if each user has their own directory (SaaS?)
22:59 tg2 then youc an use quota
23:00 tg2 and have added security against somebody uploading a zip bomb
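A sketch of the quota approach, assuming per-user directories on a volume named myvol (paths and limits are placeholders):

    gluster volume quota myvol enable
    gluster volume quota myvol limit-usage /users/alice 10GB
    gluster volume quota myvol list

The list command reports usage against each limit from gluster's own accounting, which replaces the hourly du crawl.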
23:00 dingdong using zpanel on a debian pretty nasty
23:01 dingdong http://pastebin.com/B5isUURH
23:01 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
23:02 JoeJulian Hmm, I should update that... I've changed my preferences...
23:02 JoeJulian @paste
23:02 glusterbot JoeJulian: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
23:02 dingdong this is the current script which builds the data and calculates disk usage per user
23:05 dingdong I know this is off topic but is there a way to create a softlink which is sticky bit?
23:05 dingdong I don't want to sync all the subdirs
23:06 tg2 haha good luck with that
23:06 tg2 .*admin.*
23:06 tg2 unsupported
23:06 tg2 :D
23:06 dingdong currently using softlinks to exclude directories from sync
23:06 tg2 people still use cpanel?
23:06 tg2 and directadmin
23:06 dingdong nope Zpanel
23:06 tg2 thought they had cloud for that
23:06 dingdong whaha sentora
23:06 tg2 so zpanel is like cpanel but with even less support
23:07 dingdong yes!
23:07 tg2 sounds like a conundrum
23:07 dingdong hack proff
23:07 dingdong proof
23:07 tg2 definitely
23:07 tg2 just like cpanel
23:07 tg2 never ever gets compromised
23:09 dingdong @tg2 are you manually updating your virtualdirs in apache then?
23:09 tg2 i use nginx
23:09 dingdong hmm caching
23:09 tg2 i give my clients containers or vms
23:10 JoeJulian I give my clients cloud blocks.
23:10 dingdong cloud blocks?
23:10 dingdong just a space?
23:10 JoeJulian Chunks of privately owned cloud.
23:11 dingdong took 1.5 hours to calculate 15gb omg..!
23:13 dingdong I should have stayed with rsync
23:13 tg2 how many files
23:13 tg2 JoeJulian - been using smartdatacenter
23:13 tg2 loving it so far
23:13 tg2 vs openstack
23:13 tg2 and vs vmware
23:14 dingdong more than 20 joomla sites with a lot of modules
23:14 tg2 yeah all of those things - joomla, zpanel
23:14 tg2 those sound like things that are best suited to individual containers or vms
23:14 tg2 not multitenancy
23:14 tg2 either way
23:14 tg2 gluster is not a good use case for hosting small web scripts and files
23:15 tg2 you might find it is faster if you mount it via nfs
23:15 tg2 instead of using the fuse mount
23:15 tg2 you will lose some failover and load balancing magic of gluster
23:15 tg2 but in your case
23:15 tg2 might be worth trying
23:16 tg2 so, JoeJulian - what does the allow-insecure solve
23:16 dingdong 490134 files
23:17 dingdong sorry took some time :P
23:17 tg2 well
23:17 tg2 that's reasonable
23:18 tg2 136 files per second
23:18 tg2 nb
23:18 nsoffer joined #gluster
23:18 dingdong 500mbit up and down
23:19 tg2 yeah but what disks are behind it
23:19 dingdong damn you're right
23:19 dingdong westend
23:19 dingdong 7200rpm
23:19 tg2 136 io/s seems awfully close to a hard drive
23:19 tg2 of the sata variety
23:19 tg2 anyway, if you mount it with nfs
23:19 tg2 instead of gluster
23:19 tg2 you can leverage some OS level caching
23:20 tg2 but
23:20 tg2 you might find a file that SHOULD exist
23:20 tg2 doesn't show in a directory listing
23:20 tg2 which can cause some issues with consistency
23:20 tg2 but in general
23:20 tg2 if you request a file
23:20 tg2 and it's on the backend
23:20 tg2 it'll work
23:20 tg2 now if you go and change that file
23:20 tg2 the client might not see the change
23:20 tg2 or have a broken inode
23:21 dingdong that's the whole reason I am trying glusterfs
23:22 dingdong because a change in the admin env would cause a broken link/image on the visitors' side
23:23 tg2 if you were using rsync before
23:23 tg2 nfs would be better than that
23:23 JoeJulian or make a second nfs mount just for calculating the du.
23:24 dingdong whaha
23:24 dingdong performance would not be faster because of that right?
23:24 tg2 well
23:24 tg2 :\
23:24 tg2 probably terrible but yeah you could do that
23:24 tg2 most of these files are images and css/js I bet
23:24 tg2 and php includes
23:25 tg2 lol
23:25 tg2 perfect gluster use case right JoeJulian
23:25 dingdong yep +1
23:25 JoeJulian @php
23:25 glusterbot JoeJulian: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH
23:25 glusterbot JoeJulian: --fopen-keep-cache
23:25 tg2 https://joejulian.name/blog/optimizing-web-performance-with-glusterfs/
23:25 glusterbot Title: Optimizing web performance with GlusterFS (at joejulian.name)
23:25 JoeJulian You could try that stuff too.
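A sketch of a fuse mount using the options from the factoid (values are only illustrative; longer timeouts trade consistency for speed, as the blog post discusses):

    mount -t glusterfs \
        -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache \
        server1:/myvol /var/www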
23:25 tg2 some nobody wrote that
23:25 JoeJulian Just some f'ing user.
23:25 tg2 dude probably just installed it one night
23:25 tg2 and wrote this
23:26 tg2 also equally irrelevant
23:26 tg2 https://joejulian.name/blog/one-more-reason-that-glusterfs-should-not-be-used-as-a-saas-offering/
23:26 glusterbot Title: One more reason that GlusterFS should not be used as a SaaS offering (at joejulian.name)
23:26 JoeJulian I was a little frustrated when I wrote that.
23:27 tg2 anyway
23:27 tg2 ceph might work better for this particular use case
23:27 tg2 but gluster with nfs should do what you want
23:27 tg2 you're not doing many servers and many mounted clients doing many reads
23:28 tg2 if you're worried about failover you can always put keepalived on all the hosts
23:28 tg2 and connect your nfs to that
23:30 dingdong will setup a oa env to see how that works out for me
23:30 dingdong but gluster with nfs should do what you want ?
23:30 dingdong combining gluster with nfs?
23:30 shyam joined #gluster
23:31 sadbox joined #gluster
23:39 JoeJulian ceph? nope.
23:39 JoeJulian cephfs is still not production ready.
23:39 JoeJulian And rbd is significantly slower for vm hosting.
23:47 autoditac_ joined #gluster
23:51 calavera joined #gluster
23:55 dewey joined #gluster
