IRC log for #gluster, 2015-09-09


All times shown according to UTC.

Time Nick Message
00:17 Mr_Psmith joined #gluster
00:24 xMopxShell joined #gluster
00:26 xMopxShell i just set up gluster for the first time. holy crap, it's cool.
00:26 xMopxShell you probably hear this a lot, but how does it fare in terms of stability?
00:32 bennyturns joined #gluster
00:32 bennyturns joined #gluster
00:38 _Bryan_ joined #gluster
00:49 VeggieMeat joined #gluster
00:50 siel joined #gluster
00:50 siel joined #gluster
00:51 Peppard joined #gluster
00:53 sankarshan_away joined #gluster
00:54 al joined #gluster
00:54 anoopcs joined #gluster
00:56 zhangjn joined #gluster
00:59 jermudgeon joined #gluster
00:59 xMopxShell one more question, how much memory does gluster need? best i can find is "more than 1GB"
01:01 virusuy joined #gluster
01:04 nishanth joined #gluster
01:06 zhangjn joined #gluster
01:07 nangthang joined #gluster
01:11 zhangjn_ joined #gluster
01:13 JoeJulian xMopxShell: I like the stability. I've been using it for over 6 years. For fault tolerance and stability combined, I think it's the best.
01:17 Mr_Psmith xMopxShell:  I am also interested to know what you learn about memory requirements
01:18 JoeJulian By default it uses (iirc) half of the available memory. You can tune it though. I had 4gb servers running 16 bricks each.
01:20 Mr_Psmith four gb? That’s incredible/ly low!
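The tuning JoeJulian mentions is mostly done through volume options; a minimal sketch, assuming a volume named gv0 and that the io-cache size is the knob you care about (option names and defaults vary by release, so check "gluster volume set help" first):

    # cap the client-side io-cache memory per mount (value is illustrative)
    gluster volume set gv0 performance.cache-size 256MB
    # confirm it shows under "Options Reconfigured"
    gluster volume info gv0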
01:37 haomaiwa_ joined #gluster
01:37 Lee1092 joined #gluster
01:59 rafi joined #gluster
02:01 haomaiwa_ joined #gluster
02:14 nbalacha joined #gluster
02:26 harish_ joined #gluster
02:29 bharata joined #gluster
02:31 nangthang joined #gluster
02:41 kotreshhr joined #gluster
02:41 kotreshhr left #gluster
02:46 zhangjn joined #gluster
02:56 haomaiwang joined #gluster
03:01 haomaiwang joined #gluster
03:09 zhangjn_ joined #gluster
03:12 TheSeven joined #gluster
03:12 chirino_m joined #gluster
03:16 zhangjn joined #gluster
03:28 ppai joined #gluster
03:32 vmallika joined #gluster
03:32 kdhananjay joined #gluster
03:34 shubhendu joined #gluster
03:35 gem joined #gluster
03:36 zhangjn_ joined #gluster
03:36 overclk joined #gluster
03:41 bennyturns joined #gluster
03:42 rafi joined #gluster
03:42 skoduri joined #gluster
03:47 sakshi joined #gluster
03:48 atinm joined #gluster
03:55 nbalacha joined #gluster
03:55 nishanth joined #gluster
04:01 64MADSR7C joined #gluster
04:08 devilspgd joined #gluster
04:10 deepakcs joined #gluster
04:18 hchiramm_home joined #gluster
04:22 calavera joined #gluster
04:23 gildub joined #gluster
04:29 yazhini joined #gluster
04:30 kanagaraj joined #gluster
04:44 amye joined #gluster
04:49 baojg joined #gluster
04:52 rafi joined #gluster
04:52 overclk joined #gluster
04:53 kshlm joined #gluster
04:53 raghu joined #gluster
04:57 pppp joined #gluster
04:59 rp_ joined #gluster
04:59 ndarshan joined #gluster
05:00 skoduri joined #gluster
05:01 haomaiwang joined #gluster
05:01 harish_ joined #gluster
05:02 kdhananjay joined #gluster
05:02 beeradb joined #gluster
05:04 calavera joined #gluster
05:05 vimal joined #gluster
05:11 raghu joined #gluster
05:16 Bhaskarakiran joined #gluster
05:18 kotreshhr joined #gluster
05:19 hgowtham joined #gluster
05:20 cliluw joined #gluster
05:26 pppp joined #gluster
05:32 rp_ joined #gluster
05:36 overclk_ joined #gluster
05:38 Manikandan joined #gluster
05:41 neha joined #gluster
05:41 anil joined #gluster
05:43 vmallika joined #gluster
05:52 nishanth joined #gluster
05:53 aravindavk joined #gluster
06:01 haomaiwa_ joined #gluster
06:03 spalai joined #gluster
06:03 spalai left #gluster
06:05 hagarth joined #gluster
06:06 itisravi joined #gluster
06:07 nbalacha joined #gluster
06:08 hagarth joined #gluster
06:08 raghu` joined #gluster
06:09 jiffin joined #gluster
06:13 PatNarciso joined #gluster
06:14 ashiq joined #gluster
06:18 mhulsman joined #gluster
06:20 baojg joined #gluster
06:20 kdhananjay joined #gluster
06:23 jtux joined #gluster
06:26 hagarth joined #gluster
06:26 jwd joined #gluster
06:26 EinstCrazy joined #gluster
06:27 mhulsman joined #gluster
06:29 shubhendu joined #gluster
06:34 jtux joined #gluster
06:39 DV__ joined #gluster
06:42 guntha joined #gluster
06:44 harish_ joined #gluster
06:48 nangthang joined #gluster
06:51 PatNarciso joined #gluster
06:56 jcastill1 joined #gluster
07:01 haomaiwa_ joined #gluster
07:01 jcastillo joined #gluster
07:02 ramky joined #gluster
07:03 spalai1 joined #gluster
07:05 [Enrico] joined #gluster
07:09 Philambdo joined #gluster
07:10 [Enrico] joined #gluster
07:16 Bhaskarakiran joined #gluster
07:21 kdhananjay joined #gluster
07:24 maveric_amitc_ joined #gluster
07:27 fsimonce joined #gluster
07:34 DV joined #gluster
07:40 spalai joined #gluster
07:47 jcastill1 joined #gluster
07:52 jcastillo joined #gluster
07:57 baojg joined #gluster
08:01 haomaiwa_ joined #gluster
08:04 [Enrico] joined #gluster
08:21 hflai joined #gluster
08:22 kdhananjay joined #gluster
08:26 kshlm joined #gluster
08:26 Slashman joined #gluster
08:38 s19n joined #gluster
08:41 Bhaskarakiran joined #gluster
08:45 LebedevRI joined #gluster
08:49 Philambdo joined #gluster
08:49 overclk joined #gluster
09:01 haomaiwa_ joined #gluster
09:04 aravindavk joined #gluster
09:16 atalur joined #gluster
09:20 Lee1092 joined #gluster
09:20 hagarth joined #gluster
09:21 anti[Enrico] joined #gluster
09:21 kotreshhr joined #gluster
09:29 overclk joined #gluster
09:30 poornimag joined #gluster
09:33 Bhaskarakiran joined #gluster
09:34 overclk joined #gluster
09:37 rafi joined #gluster
09:38 Apeksha joined #gluster
09:45 Slashman joined #gluster
09:50 overclk joined #gluster
09:53 hchiramm_home joined #gluster
09:53 jvn joined #gluster
09:58 Philambdo joined #gluster
10:01 haomaiwa_ joined #gluster
10:18 aravindavk joined #gluster
10:24 DV__ joined #gluster
10:29 kotreshhr joined #gluster
10:32 badone_ joined #gluster
10:32 nbalacha joined #gluster
10:32 skoduri joined #gluster
10:39 ashiq joined #gluster
10:40 poornimag joined #gluster
10:45 rafi joined #gluster
10:46 overclk joined #gluster
10:47 rafi1 joined #gluster
10:47 Bhaskarakiran joined #gluster
10:48 amye1 joined #gluster
10:49 badone_ joined #gluster
10:50 rafi joined #gluster
10:53 firemanxbr joined #gluster
10:59 Bonaparte_alt joined #gluster
10:59 gildub joined #gluster
11:00 Bhaskarakiran joined #gluster
11:01 haomaiwa_ joined #gluster
11:09 EinstCrazy joined #gluster
11:11 bharata-rao joined #gluster
11:13 Manikandan joined #gluster
11:16 Bonaparte_alt joined #gluster
11:22 DV joined #gluster
11:24 SadAdmin joined #gluster
11:24 SadAdmin Hi guys :)
11:26 SadAdmin Can I ask a question 'bout gluster? I have got some problems with an NFS connection; I've spent hours and hours googling but found nothing, and now I'm stuck and don't know what to do :(
11:26 jrm16020 joined #gluster
11:29 patryck SadAdmin: now that you have everyone's understanding, this might be the right time to formulate your problem instead of waiting for approval ;)
11:31 SadAdmin yes, but I don't know if this channel is for support and I don't want to upset users here
11:32 w00_ joined #gluster
11:32 SadAdmin so ok, I have got 2 servers (gluster storage nodes) and 1 volume on those servers (servers are connected via a 1 GB network). The volume was created as a replica and everything is fine, but now I cannot mount this volume on my client (3rd server). I am using this command: "mount -v -o mountproto=tcp,nfsvers=3 -t nfs host:/gv0 /mnt"
11:33 SadAdmin and this is the server (storage) reply: mount.nfs: trying text-based options 'mountproto=tcp,nfsvers=3,addr=host,mountaddr=host'
11:33 SadAdmin mount.nfs: prog 100003, trying vers=3, prot=6
11:33 SadAdmin mount.nfs: trying host prog 100003 vers 3 prot TCP port 2049
11:33 SadAdmin mount.nfs: portmap query failed: RPC: Remote system error - Connection refused
11:34 SadAdmin volume in storage server is in /data/brick1/gv0
11:34 SadAdmin on the storage server the running processes are glusterd and portmap (rpcbind); the nfs process is now stopped
11:35 w00_ firewall or portmap not started... ?
11:35 SadAdmin firewalld is stopped, rpcbind is running
11:45 w00_ Anyone knows how to make gluster play nice with lxd/lxc? i get volume create fail error like "Setting extended attributes failed, reason: Operation not permitted"
11:48 ndevos SadAdmin: make sure all nfs services have been stopped, restart rpcbind and then restart glusterd
11:48 ndevos SadAdmin: on a storage server, you should not be running any nfs services (except rpcbind), and not mount anything over nfs
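A minimal sketch of ndevos's advice for a CentOS 7 storage server; the systemd service names are assumptions and differ on CentOS 6 ("service nfs stop") and other distributions:

    # stop and disable the distribution's kernel NFS server so it cannot claim port 2049
    systemctl stop nfs-server && systemctl disable nfs-server
    # restart rpcbind, then glusterd, so Gluster's built-in NFS server can register
    systemctl restart rpcbind
    systemctl restart glusterd
    # verify an NFS service is registered with the portmapper
    rpcinfo -p | grep -E 'nfs|mountd'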
11:49 SadAdmin selinux disabled
11:49 SadAdmin problem solved
11:49 SadAdmin OH MY FCKING GOD
11:50 w00_ heh
11:50 lalatenduM joined #gluster
11:50 hchiramm gluster community meeting starts in another 15 mins, please join #gluster-meeting channel in freenode
11:50 SadAdmin on the storage server, running: nfs, rpcbind, glusterd
11:51 Mr_Psmith joined #gluster
11:51 ndevos SadAdmin: nfs services from your distribution will most likely conflict with the nfs services from Gluster
11:54 hchiramm joined #gluster
11:55 SadAdmin storage nfs process stopped, and all working too :)
11:56 SadAdmin so SELinux was the problem
11:56 ndevos SadAdmin: what distribution are you using?
11:59 * Romeor is still waiting for free hugs
12:00 hagarth Romeor: hop on to #gluster-meeting for that :)
12:00 amye1 Free hugs? ++
12:01 haomaiwang joined #gluster
12:02 overclk joined #gluster
12:03 rjoseph joined #gluster
12:03 jdarcy joined #gluster
12:03 SadAdmin ndevos = CentOS
12:06 cabillman joined #gluster
12:07 Bhaskarakiran joined #gluster
12:09 pdrakeweb joined #gluster
12:10 Mr_Psmith joined #gluster
12:13 jtux joined #gluster
12:22 ppai joined #gluster
12:24 Romeor ndevos = human
12:27 bennyturns joined #gluster
12:35 B21956 joined #gluster
12:35 overclk joined #gluster
12:36 rafi joined #gluster
12:44 jdarcy joined #gluster
12:48 w00_ lol
12:50 jcastill1 joined #gluster
12:59 w00_ left #gluster
13:01 spalai left #gluster
13:02 anil joined #gluster
13:04 kotreshhr left #gluster
13:05 shyam joined #gluster
13:07 jcastillo joined #gluster
13:09 mhulsman joined #gluster
13:10 mhulsman joined #gluster
13:19 julim joined #gluster
13:19 amye1 left #gluster
13:21 amye joined #gluster
13:21 chirino joined #gluster
13:25 overclk joined #gluster
13:28 haomaiwa_ joined #gluster
13:29 _Bryan_ joined #gluster
13:32 overclk_ joined #gluster
13:35 w00_ joined #gluster
13:35 jobewan joined #gluster
13:41 klaxa joined #gluster
13:46 * ndevos = himself
13:50 jvn left #gluster
13:51 rafi joined #gluster
13:53 neofob joined #gluster
13:53 hchiramm_home joined #gluster
13:54 dgandhi joined #gluster
14:01 hagarth joined #gluster
14:01 haomaiwa_ joined #gluster
14:05 owlbot` joined #gluster
14:08 Mr_Psmith joined #gluster
14:10 klaxa|work joined #gluster
14:14 calavera joined #gluster
14:16 mhulsman joined #gluster
14:17 overclk joined #gluster
14:23 _Bryan_ joined #gluster
14:26 calavera joined #gluster
14:29 social joined #gluster
14:31 papamoose1 left #gluster
14:36 overclk joined #gluster
14:43 rwheeler joined #gluster
14:48 spcmastertim joined #gluster
14:57 cholcombe joined #gluster
14:59 rafi joined #gluster
15:01 haomaiwa_ joined #gluster
15:12 overclk joined #gluster
15:14 _maserati joined #gluster
15:20 wushudoin joined #gluster
15:27 ayma_ joined #gluster
15:29 ayma_ hi I'm trying to run "gluster nfs-ganesha enable" but am running into errors.  Error is "nfs-ganesha: failed: Commit failed on localhost."
15:30 ayma_ when looking at the logs it seems like it wants to add a dir, nfs-ganesha, to /var/run/gluster/shared_storage. Wondering what was supposed to create /var/run/gluster/shared_storage
15:30 haomaiwa_ joined #gluster
15:31 ayma_ think the issue might be " 0-management: Commit of operation 'Volume (null)' failed on localhost", wondering how to fix the issue
15:33 aravindavk joined #gluster
15:34 ayma_ if I manually create the dir /var/run/gluster/shared_storage/ it seems like the next error is "Commit failed on <hostname of gluster peer>"
15:36 Gill joined #gluster
15:36 Manikandan joined #gluster
15:44 hchiramm_home joined #gluster
15:45 ayma_ if i don't want a ha configuration yet, but I do want to use nfs-ganesha, then should I be running gluster nfs-ganesha enable?
15:50 _Bryan_ joined #gluster
15:51 magamo ayma_: I believe what you are looking to run is: gluster volume set all cluster.shared_storage_enable on
15:51 magamo Then the rest of your steps should work (This command creates and mounts the shared storage volume)
15:56 ayma_ magamo:  thanks for the suggestion, however I got the following error "volume set: failed: option : cluster.shared_storage_enable does not exist"
15:57 ayma_ the cmd suggested "Did you mean cluster.dht-xattr-name or ...enable-shared-storage?"
15:58 ayma_ but i don't see those options when i run "gluster help"
15:58 magamo ayma_: I think the option I cited was brought in with gluster 3.7.X
15:59 magamo I don't have documentation on hand on how to create/enable it on older versions.
15:59 DV joined #gluster
16:00 magamo And yes, try it as 'enable-shared-storage.' I'm constantly transposing underscores and hyphens.
16:00 ayma_ [root@qint-tor01-c6 ~]# gluster --version glusterfs 3.7.4 built on Sep  1 2015 15:55:03 Repository revision: git://git.gluster.com/glusterfs.git Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com> GlusterFS comes with ABSOLUTELY NO WARRANTY. You may redistribute copies of GlusterFS under the terms of the GNU General Public License. [root@qint-tor01-c6 ~]#
16:00 glusterbot Title: Technologies | Red Hat (at www.gluster.com)
16:01 magamo They may have changed the option name in 3.7.4 for all I know.  I last did it in 3.7.3.
16:01 ayma_ okay, i only have one volume
16:01 haomaiwa_ joined #gluster
16:01 ayma_ error i get with enable-shared-storage says "volume set: failed: Not a valid option for single volume"
16:01 magamo Set it for 'all' anyway, it's a global option, not a volume option.
16:03 ayma_ got it thanks
16:03 ayma_ gluster volume set all enable-shared-storage enable
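For reference, the full option name in 3.7 is cluster.enable-shared-storage; a minimal sketch of enabling it and checking the result (the volume and mount point names are what 3.7 is expected to create):

    # cluster-wide option, so the target is "all" rather than a volume name
    gluster volume set all cluster.enable-shared-storage enable
    # a gluster_shared_storage volume should now exist and be mounted on every node
    gluster volume info gluster_shared_storage
    mount | grep /var/run/gluster/shared_storage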
16:07 bennyturns joined #gluster
16:16 overclk joined #gluster
16:17 shubhendu joined #gluster
16:27 mikemol joined #gluster
16:29 jobewan joined #gluster
16:46 JoeJulian @lucky gluster nfs-ganesha shared_storage
16:46 glusterbot JoeJulian: http://www.gluster.org/pipermail/gluster-users.old/2015-June/022432.html
16:47 JoeJulian Nope, not lucky.
16:56 _maserati Would anyone mind giving this a glance and telling me if it's safe? I need to add a new gluster server to an existing cluster to test response time over a 300-mile geographic separation
16:56 _maserati http://avid.force.com/pkb/articles/en_US/how_to/Adding-a-node-to-a-cluster-using-gluster
16:56 glusterbot Title: Adding a node to a cluster using glusterAvid Support (at avid.force.com)
16:56 JoeJulian ayma_, magamo: I think it's https://github.com/gluster/glusterdocs/blob/4cec5b6fa529da5e1e054add72235456e91b8888/Administrator%20Guide/Distributed%20Geo%20Replication.md#configuring-meta-volume
16:56 glusterbot Title: glusterdocs/Distributed Geo Replication.md at 4cec5b6fa529da5e1e054add72235456e91b8888 · gluster/glusterdocs · GitHub (at github.com)
16:57 Gill joined #gluster
16:57 _maserati whoa wrong link
16:57 _maserati dont look at that one
16:57 JoeJulian I'm very displeased at the state of documentation for nfs-ganesha.
16:57 JoeJulian ndevos: ^
16:58 _maserati omg i lost the link, back to google!
16:58 JoeJulian _maserati: What's ping time?
16:58 _maserati 12ms
16:58 _maserati i know it'll be a little sluggish
16:58 JoeJulian So expect 36.
16:58 _maserati I need to set it up in our dev environment and see if our app can deal with it
16:59 _maserati i FINALLY got the okay to do this test
16:59 JoeJulian Hehe
17:00 JoeJulian Yeah, just add it with an increased replica count (assuming that's your goal). When you're ready to remove it, just decrease the replica count during the remove-brick.
17:00 _maserati i mean is it as easy as installing gluster server on centos 7, peer probe from the existing gluster and adding the replicated brick ?
17:00 JoeJulian yep
17:00 _maserati sweet
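A minimal sketch of the procedure JoeJulian describes, assuming an existing replica-2 volume gv0 on server1/server2 and a new server3 (all names and paths hypothetical):

    # run from an existing node: add the new peer to the trusted pool
    gluster peer probe server3
    # add the new brick while raising the replica count from 2 to 3
    gluster volume add-brick gv0 replica 3 server3:/bricks/gv0/brick
    # copy existing data onto the new brick
    gluster volume heal gv0 full

    # when the test is over, drop back to replica 2 while removing the brick
    gluster volume remove-brick gv0 replica 2 server3:/bricks/gv0/brick force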
17:01 ramky joined #gluster
17:01 haomaiwa_ joined #gluster
17:01 _maserati lol look at my new link: https://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
17:01 glusterbot Title: How to expand GlusterFS replicated clusters by one server (at joejulian.name)
17:01 _maserati ;)
17:02 JoeJulian :)
17:02 wushudoin| joined #gluster
17:02 JoeJulian That was more for people that want to maintain current replica counts.
17:02 _maserati yeah im just adding it as an additional replica
17:03 JoeJulian piece of cake
17:05 Rapture joined #gluster
17:06 purpleidea joined #gluster
17:06 purpleidea joined #gluster
17:07 harish_ joined #gluster
17:07 wushudoin| joined #gluster
17:08 hagarth upcoming gluster meetups - 1. http://www.meetup.com/GlusterFS-Silicon-Valley/events/224932563/ at Facebook in Menlo Park, CA. 2. http://www.meetup.com/glusterfs-India/events/222201221/ at Red Hat in Bangalore
17:08 glusterbot Title: GlusterFS Meetup is Back! - GlusterFS Meetup Group of Silicon Valley (Mountain View, CA) - Meetup (at www.meetup.com)
17:12 anil joined #gluster
17:13 hagarth JoeJulian: have you tried gitter?
17:13 atrius joined #gluster
17:13 overclk joined #gluster
17:13 JoeJulian no, I haven't.
17:14 chirino joined #gluster
17:15 hagarth JoeJulian: https://gitter.im/gluster/glusterfs
17:15 glusterbot Title: gluster/glusterfs - Gitter (at gitter.im)
17:15 skoduri joined #gluster
17:16 Rapture joined #gluster
17:16 _maserati Vijay's icon looks like the cookie monster =B
17:17 shyam joined #gluster
17:18 hagarth looks pretty interesting. basically is like persistent IRC.
17:18 hagarth _maserati: it is a BuddhaBrot ;)
17:18 JoeJulian My IRC is persistent. ;)
17:19 hagarth JoeJulian: yes, mine is too when I use screen. Else I sift through botbot.me.
17:19 Gill joined #gluster
17:21 _maserati Still current: http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo  ?
17:21 JoeJulian I use znc and xchat.
17:24 JoeJulian Hmm. there should be a symlink.
17:24 JoeJulian I wonder if I have access to that box.
17:25 jwd joined #gluster
17:26 _maserati yeah that link isnt accessible >.<
17:26 kkeithley What's the problem?
17:26 _maserati doesnt exist
17:26 _maserati im finding the right link now
17:28 kkeithley fixed
17:28 _maserati dawww thanks
17:29 kkeithley http://download.gluster.org/pub/gluster/glusterfs/LATEST/{EPEL.repo,RHEL,CentOS}/glusterfs-epel.repo are the same.  RHEL and CentOS are just symlinks
17:30 nishanth joined #gluster
17:38 _maserati wow my predecessor really didn't want to make this easy for me... he has set up 6 distributed bricks on each server, replicating each... and i need to add another server to replicate each as well
17:38 jcastill1 joined #gluster
17:39 stickyboy joined #gluster
17:40 _maserati how in the hell does 8G + 8G = 42G available to the gluster volume 8|
17:40 _maserati oh man nvm, im an idiot. he set up two volumes xD
17:41 cabillman joined #gluster
17:44 jcastillo joined #gluster
17:52 bennyturns joined #gluster
17:52 calavera joined #gluster
17:53 shyam joined #gluster
17:55 calavera joined #gluster
18:00 shaunm joined #gluster
18:02 _maserati oh dear lord
18:03 _maserati my predecessor has been writing files to the gluster brick itself on one server (as in nothing gets replicated)... is this a catastrophe or fixable?
18:07 JoeJulian _maserati: It's fixable. Create a list of files from the brick (find -type f), then "stat" that file path on the volume mount. That /should/ bring those files in.
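A minimal sketch of that suggestion, assuming the brick lives at /bricks/gv0/brick and the volume is mounted at /mnt/gluster (both paths hypothetical):

    # walk the brick (skipping gluster's internal .glusterfs tree) and stat each
    # file through the client mount, which triggers a self-heal check on it
    cd /bricks/gv0/brick
    find . -path ./.glusterfs -prune -o -type f -print | while read -r f; do
        stat "/mnt/gluster/$f" > /dev/null
    done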
18:09 _maserati Can you verify one thing for me... the mount: thisisrealserver.google.com:/dev-volume /mnt/gluster glusterfs
18:09 _maserati would you write files to /mnt/gluster ?
18:10 JoeJulian yes
18:10 _maserati thank you
18:11 _maserati "he" has been writing everything to this mount: /dev/sdh1       /mnt/glusterdev
18:11 _maserati lol
18:33 bennyturns joined #gluster
18:39 ira_ joined #gluster
18:40 plarsen joined #gluster
18:43 eljrax joined #gluster
18:43 R0ok_ joined #gluster
18:53 _maserati I need halp! I've got my mounted volumes on two servers but writes are not being replicated. status shows them as online...
18:55 _maserati there's not even a lock file on the one server thats getting nothing
18:58 _maserati wow
18:58 _maserati nevermind
18:58 _maserati for some reason the mount dropped
18:58 _maserati remounted, looks good
18:59 calavera joined #gluster
18:59 _maserati Peace of mind: Should there be a lost+found directory in a gluster volume ?
19:03 _maserati .... does gluster add /mnt/glusterdev automatically?
19:04 _maserati and is it okay to write to /mnt/glusterdev/oneofmyvolumes ?
19:10 _maserati omg nevermind.... this dude has configs everywhere jacking things up
19:10 _maserati im losing my mind
19:19 JoeJulian _maserati: No, lost+found is created by the ext filesystem. That's one reason your bricks should be in a subdirectory of the mount. Otherwise, that lost+found will be created on the bricks and will have differing gfids by their very nature.
19:19 JoeJulian This should only matter if you ever need to get to anything in the lost+found directory.
19:20 _maserati okay, i got some heavy reconfiguring to do.. -.-
19:20 _maserati thanks for confirming
19:20 JoeJulian That's why you make the big bucks.
19:20 _maserati i inherited this mess!
19:21 _maserati but im glad it introduced me to gluster, im just dissappointed i have to learn gluster via fixing it
19:31 marbu joined #gluster
19:31 JoeJulian Nothing like a little adversity to create expertise.
19:32 cc1 joined #gluster
19:33 cc1 Can anyone point me to docs for doing geo-replication with non-root user? Or does the following thread still apply for 3.7? http://www.gluster.org/pipermail/gluster-users.old/2015-January/020080.html
19:33 glusterbot Title: [Gluster-users] Non-root user geo-replication in 3.6? (at www.gluster.org)
19:35 rotbeard joined #gluster
19:39 mbukatov joined #gluster
19:39 papamoose joined #gluster
19:40 lkoranda joined #gluster
19:40 marbu joined #gluster
19:41 lkoranda_ joined #gluster
19:45 mbukatov joined #gluster
19:47 _maserati Okay, so I have no idea how long node #2 was unmounted from gluster. Does it self-heal automatically? Is there a way I can check the status?
19:49 marbu joined #gluster
19:50 lkoranda joined #gluster
19:54 mbukatov joined #gluster
19:57 JoeJulian _maserati: it does, "gluster volume heal $vol info"
19:58 dlambrig joined #gluster
19:58 JoeJulian cc1: I wonder if http://aravindavk.in/blog/introducing-georepsetup/ does that.
19:58 glusterbot Title: Introducing georepsetup - Gluster Geo-replication Setup Tool (at aravindavk.in)
19:58 _maserati It says Number of entries: 0 under each brick, im assuming thats good?
19:59 lkoranda joined #gluster
20:02 JoeJulian That says that gluster's unclean tracking index is empty. I generally trust but spot-check directory trees and md5 sums.
20:02 _maserati of millions of files!?
20:03 _maserati j/k but when you "spot check" do you check under the bricks of each node or under the gluster mount ?
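One way to do the kind of spot check JoeJulian mentions, comparing a random sample of files on the client mount against the copies on one brick (paths hypothetical; on a distributed volume only part of the sample will hash to any given brick):

    find /mnt/gluster -type f | shuf -n 20 | while read -r f; do
        b="/bricks/gv0/brick/${f#/mnt/gluster/}"
        # the file only exists on this brick if DHT placed it here
        [ -e "$b" ] && { cmp -s "$f" "$b" || echo "MISMATCH: $f"; }
    done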
20:05 dijuremo Do you guys run Supermicro 36 bay chassis for gluster? If so, what do you find to be the optimal configuration in terms of bricks, number of controllers, etc...
20:09 dijuremo Would it be better to get 3 sas/raid controllers and make 3 bricks of 12 drives each? vs say a single controller that connects to all drives?
20:10 _maserati well... obviously
20:10 JoeJulian dijuremo: I've found that JBOD, 1 disk per brick, has been the most resilient.
20:11 _maserati i mean for throughput, more controllers will help
20:11 JoeJulian https://photos.google.com/search/phoenix/photo/AF1QipN7fjywtq06uACjMOaHjMWzCiSnBoIJ2hrkwFqo
20:11 glusterbot Title: Sign in - Google Accounts (at photos.google.com)
20:11 dijuremo JoeJulian: 404 page
20:12 JoeJulian https://goo.gl/photos/iArCm5UAztRMbbi89
20:12 JoeJulian stupid google
20:12 _maserati sexy
20:12 dijuremo JoeJulian: So then you do no raid at all?
20:12 JoeJulian Right.
20:13 _maserati oh i didnt realize you were talking raid. there's really no point to raid with gluster
20:13 JoeJulian Well, there can be.
20:14 _maserati i go full jbod, but if i need more throughput i'll split up disks on diff controllers
20:14 JoeJulian If your spindles can't keep up with your network and that's important to your use case, a raid0 may be beneficial.
20:14 dijuremo So do you distribute across the bricks on the same server and replicate to the other server?
20:14 JoeJulian dijuremo: yes
20:14 _maserati i do yes
20:15 dijuremo But if you lose one drive in one server do you lose the whole server or not?
20:15 _maserati not if it's jbod
20:15 _maserati or a protective raid
20:16 _maserati assuming! you have more than 1 node of course
20:16 dijuremo How does gluster handle a brick failure?
20:16 dijuremo Oh wait... distribute is not striping....
20:16 dijuremo I was thinking striping... so you only lose the data in that drive...
20:16 JoeJulian If I needed raid for throughput, I would raid0 just enough disks to meet my throughput expectations. That would be like what, 6 disks per raid0? So your  36 bay chassis would be good for 6 bricks. If you lose one drive, you lose a brick but it's replicated so we don't care.
20:17 _maserati but if you replicate each brick, then you dont lose that data
20:17 JoeJulian And no, we don't allow strippers in our datacenter.
20:18 _maserati but there's also risk there... if you've only got 2 nodes, and you're doing 6-disk raid-0.... that's... a lot of room for losing data
20:18 dijuremo JoeJulian: too bad... boring DC... ;)
20:18 _maserati dijuremo: It's why my backend is a SAN with 15k drives =P
20:18 JoeJulian For anything that's too important to lose, and too big to backup, replica 3.
20:19 dijuremo _maserati: Or if it is that day and you lose the two drives in two servers at the same time...
20:19 _maserati dijuremo: that's what im referring too... hehe
20:19 JoeJulian If it's distributed enough, you're playing the odds.
20:19 _maserati i do have some super micro's though
20:20 dijuremo So gluster will try to distribute the data evenly across all bricks?
20:20 _maserati and if their anything like yours, drives love to die in them
20:20 JoeJulian Having two bricks, on two servers, that both serve the same replica out the the entire dht can well exceed 8 nines if you have done it right.
20:21 JoeJulian @lucky dht misses are expensive
20:21 glusterbot JoeJulian: https://joejulian.name/blog/dht-misses-are-expensive/
20:21 JoeJulian ^ that explains well how dht works.
20:21 JoeJulian imho
20:21 _maserati you're so damn smart :)
20:21 JoeJulian hehe
20:22 JoeJulian I'm just old.
20:22 JoeJulian Lots of experience.
20:23 dijuremo Why does RH in their guides recommend raid?
20:23 JoeJulian I have no idea. They do things strangely sometimes with no regard for the real world.
20:24 JoeJulian Designing for SLA is simple math. Some of their documents read like it's magic.
20:27 dijuremo So in My case, two node replica, they recommend raid 6....
20:27 dijuremo For 3 way replica they say JBOD is fine
20:27 dijuremo "Red Hat Gluster Storage in JBOD configuration is recommended for highly multi-threaded workloads with sequential reads to large files. For such workloads, JBOD results in more efficient use of disk bandwidth by reducing disk head movement from concurrent accesses. For other workloads, two-way replication with hardware RAID is recommended."
20:28 mhulsman joined #gluster
20:31 JoeJulian Maybe I should write a blog article about backing stores.
20:31 _maserati I'd read it!
20:31 JoeJulian That could apply to both gluster and ceph.
20:33 JoeJulian You want performance? Put your xfs journals on ssd. :)
20:34 _maserati My background is ZFS so my perception of backing stores likely doesnt fit well in the gluster arena
20:34 dijuremo I have good performance except for the small files.. :(
20:34 JoeJulian 3.7?
20:34 dijuremo Yes 3.7.3
20:35 JoeJulian I know they worked on some tweaks that are supposed to make that better in 3.7, but I haven't looked at it.
20:35 dijuremo Doing an ls or find -type f on the brick is 1-2 orders of magnitude faster...
20:35 dijuremo than on the gluster volume...
20:35 JoeJulian I've always been of the opinion that if you're optimizing your storage to achieve performance at the front end, you're doing it wrong.
20:36 dijuremo Also since I upgraded to 3.7.3 samba is core dumping....
20:36 _maserati How does one even "optimize" for front end? that makes no sense to me
20:36 JoeJulian dijuremo: if you "echo *" it's instantaneous. What you're seeing is the lookup(), open, read, close, repeat. That's how those tools are written.
20:37 JoeJulian ,,(php) for instance.
20:37 glusterbot (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH
20:37 glusterbot --fopen-keep-cache
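What glusterbot's second suggestion looks like as a client mount, assuming the mount helper on your version passes these options through (timeout values are illustrative and trade cache coherency for speed):

    mount -t glusterfs \
        -o attribute-timeout=30,entry-timeout=30,negative-timeout=30,fopen-keep-cache \
        server1:/gv0 /mnt/gluster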
20:37 JoeJulian dijuremo: have you filed a bug report?
20:37 dijuremo Not yet...
20:37 JoeJulian .. and are you using the vfs?
20:37 dijuremo Yep...
20:39 rwheeler joined #gluster
20:40 dijuremo I am not sure exactly what to look for in the core dump, never tried to read one...
20:40 dijuremo Also cannot easily roll back to the old 3.6.x
20:46 dijuremo JoeJulian: When you say "echo *", do you mean go to a folder with lots of small files and do echo * ?
20:47 dlambrig joined #gluster
20:48 _maserati i beleive that's what he meant yes
20:48 _maserati on the gluster mount
20:50 dijuremo Gotta find me a directory with lots of small files to test, I have usually been trying to do an ls -R or find .winprofile.V2 -type f, which is really slow on the mounted gluster volume vs the brick
20:51 _maserati i just happened to be doing some find commands, so i tested myself... slow af on the gluster mount
20:52 _maserati oh wait wrong dir
20:52 _maserati it's fast
20:52 _maserati sry
20:53 dijuremo echo * is fast here too
20:54 dijuremo My comparison is the brick vs the gluster mounted volume...
20:54 dijuremo [root@ysmha01 Cookies]# time ( echo * > /dev/null )
20:54 dijuremo real    0m0.209s
20:55 dijuremo [root@ysmha01 Cookies]# time ( find . -type f > /dev/null )
20:55 dijuremo real    0m0.790s
20:55 _maserati much faster than ls ?
20:55 dijuremo Well... the issue is I had already run it once, so now it is cached...
20:56 dijuremo I have not figured out how long before it will flush the cache..
20:57 dijuremo _maserati: Maybe something like this will help your coworker?
20:58 dijuremo # ls /bricks/
20:58 dijuremo DO_NOT_EVER_WRITE_TO_ANY_OF_THESE_FOLDERS_DIRECTLY  hdds  she  vmstorage
20:58 dijuremo :P
20:58 _maserati dude...
20:58 JoeJulian See, typically when you ls you've got an alias that adds a bunch of stuff like color or other decorators to show you what the file type is, what the permissions are, etc. That requires a fstat call for every item in the directory. That's a lookup, open, fstat64, close for every file.
20:58 JoeJulian find, of course, also has to do that to determine if the directory entry is a directory that needs to be traversed.
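A quick way to see the effect JoeJulian describes, run inside a large directory on the mount (\ls bypasses the usual color alias, so it can list from readdir alone instead of stat'ing every entry; timings are illustrative):

    time ls -l --color=always > /dev/null   # stat call per entry
    time \ls > /dev/null                     # readdir only
    time ( echo * > /dev/null )              # shell expansion, no stat at all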
20:58 dlambrig joined #gluster
20:58 dijuremo JoeJulian, my point is that an ls in the brick itself takes about 10 seconds for about 10K files from a roaming profile
20:59 dijuremo The same operation on the mounted glusterf volume takes 1-2 mins
20:59 JoeJulian I'm just explaining why.
20:59 _maserati [root@st2411 gluster]# ls
20:59 _maserati DO NOT DELETE THE LOCK FILE
20:59 _maserati lock
20:59 _maserati HE did that.
21:00 amye joined #gluster
21:02 dijuremo # time ( find /bricks/hdds/brick/home/jgibbs/.winprofile.V2 -type f > /dev/null )
21:02 dijuremo real    0m0.765s
21:02 dijuremo When I run find directly on the brick on that guy's folder, it is very fast...
21:03 dijuremo .... tick ... tock.... tick .... tock ... still waiting the output in the gluster mounted volume...
21:06 dijuremo time ( find /export/home/jgibbs/.winprofile.V2 -type f > /dev/null )
21:06 dijuremo real    2m33.726s
21:06 dijuremo So the same process on the brick vs gluster has a ridiculous disparity. Would you consider that normal?
21:07 dijuremo # mount | grep export
21:07 dijuremo 10.0.1.7:/export on /export type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,allow_other,max_read=131072)
21:17 cc1 left #gluster
21:18 _maserati JoeJulian: I'm afraid google just won't do here. I finally found the issue. My predecessor set up a glusterfs volume, but never mounted it. Only 1 server has ever received any of the data but it looks like attempts were made to cp the data across. How do I properly/safely go about dropping the gluster volume on node 2 and readding it, mounting, and letting gluster sync? The data on node 2 is completely different than
21:18 _maserati the data on node 1.... Node 1 data matters. Node 2 data can go away.
21:20 dijuremo _maserati: shouldn't you just remove the node from the volume, then format it and then add it back as a replica?
21:21 _maserati actually what's the problem is, he was writing to that glusterfs directly... so both bricks are fcked. I guess i gotta cp all the data out, remake the volume, and cp it back in?
21:21 JoeJulian _maserati: Just stop any glusterd on server2, format the bad brick, create the volume-id ,,(extended attribute) again, start the brick (gluster volume start $vol force) then heal..full.
21:21 glusterbot _maserati: To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}
21:21 _maserati directly to the brick, that is
21:21 JoeJulian s/glusterd/glusterfsd/
21:21 glusterbot What JoeJulian meant to say was: _maserati: Just stop any glusterfsd on server2, format the bad brick, create the volume-id ,,(extended attribute) again, start the brick (gluster volume start $vol force) then heal..full.
21:22 JoeJulian I'm assuming you know how to read and set extended attributes.
21:22 _maserati no =(
21:23 JoeJulian (with root permissions) getfattr -n trusted.glusterfs.volume-id
21:23 _maserati lol holy crap i have no idea what that means
21:24 JoeJulian on the brick root for server1
21:24 _maserati i did your previous suggestion on 1 file
21:24 _maserati and it comes back with alot of stuff
21:24 _maserati what's that pastebin alt u guys like?
21:24 JoeJulian yeah, it can all be ignored. The act of reading it triggers a self-heal check.
21:24 JoeJulian @paste
21:24 glusterbot JoeJulian: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
21:25 JoeJulian I love termbin because it's distro independent.
21:27 JoeJulian So anyway... that getfattr gets you the volume-id, something like trusted.glusterfs.volume-id=0x702c29cbf63d4d4fa6a5a4750def68fd. Then after you've formatted the brick on server2 you need to set the volume id, setfattr -n trusted.glusterfs.volume-id -v 0x702c29cbf63d4d4fa6a5a4750def68fd $brick_root
21:28 _maserati getfattr -n trusted.glusterfs.c1fe88fe-e2e7-4122-8810-6fdacfc3ef5f
21:28 _maserati Usage: getfattr [-hRLP] [-n name|-d] [-e en] [-m pattern] path...
21:29 JoeJulian (with root permissions) getfattr -n trusted.glusterfs.volume-id $brick_root
21:29 JoeJulian and volume-id is actually volume-id, not some uuid.
21:29 _maserati i assumed u meant from gluster volume info, the volume id =D
21:30 JoeJulian Though that uuid works if you remove the dashes and precede it with 0x
21:30 JoeJulian ... in the setfattr command on the formatted brick.
21:31 JoeJulian When I want you to substitute something, I try to be consistent about saying things like $vol or $brick_root so you know that you have to substitute and a copy-paste will fail.
21:31 _maserati sry for pastebin: http://pastebin.com/7fgiAGpW
21:31 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
21:32 _maserati I know that but in my defense, your first reply didn't show the $brick_root :)
21:33 JoeJulian You're right. I'm sorry for being distracted. I'm trying to write a ceph test while I'm doing this. That would still work if you used the 0s string.
21:33 _maserati I really appreciate your willingness to help
21:33 julim joined #gluster
21:34 JoeJulian But if you wanted to see the hex version, getfattr -n trusted.glusterfs.volume-id -e hex $brick_root
21:34 dlambrig joined #gluster
21:34 _maserati So, keep note of that output and use set when im done formatting?
21:34 JoeJulian right
21:34 _maserati roger
21:34 _maserati thank you
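The whole procedure JoeJulian spelled out, gathered into one sketch; assumes volume gv0 with the healthy brick at server1:/bricks/gv0/brick and the brick being rebuilt at server2:/bricks/gv0/brick (names hypothetical), and that the brick sits on its own filesystem that can safely be reformatted:

    # on server1: read the volume-id from the healthy brick root
    getfattr -n trusted.glusterfs.volume-id -e hex /bricks/gv0/brick

    # on server2: stop the brick's glusterfsd process, mkfs and remount the brick
    # filesystem, then stamp the same volume-id onto the empty brick root
    setfattr -n trusted.glusterfs.volume-id -v 0x<value-from-server1> /bricks/gv0/brick

    # restart the brick and pull everything back from server1's copy
    gluster volume start gv0 force
    gluster volume heal gv0 full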
21:54 jrdn_ joined #gluster
22:03 n-st joined #gluster
22:07 n-st joined #gluster
22:09 cliluw joined #gluster
22:11 frakt joined #gluster
22:17 badone_ joined #gluster
22:26 Mr_Psmith joined #gluster
22:30 virusuy hey guys
22:31 virusuy i have a distributed replicated 4-node volume
22:31 virusuy and two of them (replica between each) say peer rejected
22:53 plarsen joined #gluster
22:54 TheCthulhu joined #gluster
23:21 Mr_Psmith joined #gluster
23:24 mjrosenb joined #gluster
23:34 frakt joined #gluster
23:43 pocketprotector joined #gluster
23:43 pocketprotector left #gluster
23:49 edwardm61 joined #gluster
