
IRC log for #gluster, 2016-04-01


All times shown according to UTC.

Time Nick Message
00:01 haomaiwang joined #gluster
00:28 camg joined #gluster
00:58 vmallika joined #gluster
01:00 DV__ joined #gluster
01:01 haomaiwang joined #gluster
01:16 EinstCrazy joined #gluster
01:19 EinstCra_ joined #gluster
01:32 nehar joined #gluster
01:48 baojg joined #gluster
01:48 itisravi joined #gluster
02:01 haomaiwa_ joined #gluster
02:03 ira_ joined #gluster
02:08 Marbug joined #gluster
02:08 harish__ joined #gluster
02:33 ira_ joined #gluster
02:49 prasanth joined #gluster
02:54 7GHAANRGV joined #gluster
03:01 haomaiwa_ joined #gluster
03:04 sakshi joined #gluster
03:07 Lee1092 joined #gluster
03:17 overclk joined #gluster
03:18 kshlm joined #gluster
03:21 Gnomethrower joined #gluster
03:23 primusinterpares joined #gluster
03:24 jhyland joined #gluster
03:26 kaushal_ joined #gluster
03:29 nishanth joined #gluster
03:42 DV joined #gluster
03:43 overclk_ joined #gluster
03:54 shubhendu joined #gluster
04:01 haomaiwa_ joined #gluster
04:01 prasanth joined #gluster
04:03 RameshN joined #gluster
04:03 prasanth joined #gluster
04:06 Apeksha joined #gluster
04:07 itisravi joined #gluster
04:15 baojg joined #gluster
04:18 nbalacha joined #gluster
04:26 gem joined #gluster
04:26 atinm joined #gluster
04:28 overclk joined #gluster
04:31 itisravi joined #gluster
04:33 ppai joined #gluster
04:36 coredump joined #gluster
04:37 hackman joined #gluster
04:41 jiffin joined #gluster
04:43 camg joined #gluster
04:44 jiffin1 joined #gluster
04:46 prasanth joined #gluster
04:51 aspandey joined #gluster
04:58 jiffin joined #gluster
05:01 haomaiwang joined #gluster
05:03 jiffin1 joined #gluster
05:04 poornimag joined #gluster
05:08 ashiq_ joined #gluster
05:13 ndarshan joined #gluster
05:15 Neilo joined #gluster
05:17 Neilo We are planning to build SSD servers, in a 2 or 4 node replica of 2, with the aim of getting "600MB/s (Megabytes per second) of write performance". Is this speed possible using SMB or NFS to gluster? Any experience of reaching this kind of write speed? Or should we look to use another technology to sync to our existing gluster asynchronously?
05:26 ramteid joined #gluster
05:29 Manikandan joined #gluster
05:29 prasanth joined #gluster
05:31 anil_ joined #gluster
05:31 karthik___ joined #gluster
05:35 baojg joined #gluster
05:38 spalai joined #gluster
05:38 arcolife joined #gluster
05:39 hgowtham joined #gluster
05:40 spalai joined #gluster
05:40 overclk joined #gluster
05:42 pur_ joined #gluster
05:42 karnan joined #gluster
05:43 jwd joined #gluster
05:46 Bhaskarakiran joined #gluster
05:47 prasanth joined #gluster
05:48 skoduri joined #gluster
05:53 jiffin1 joined #gluster
05:54 prasanth_ joined #gluster
05:55 overclk joined #gluster
05:55 nishanth joined #gluster
05:58 prasanth joined #gluster
06:01 haomaiwa_ joined #gluster
06:02 rafi1 joined #gluster
06:03 spalai left #gluster
06:04 spalai joined #gluster
06:04 Saravanakmr joined #gluster
06:05 jiffin1 joined #gluster
06:07 kotreshhr joined #gluster
06:11 rastar joined #gluster
06:13 post-factum Neilo: do you have at least 10GbE interconnect?
06:14 Gaurav_ joined #gluster
06:16 Neilo post-factum: yes, we have 10GbE
06:16 kshlm joined #gluster
06:16 jiffin1 joined #gluster
06:21 jiffin1 joined #gluster
06:22 mhulsman joined #gluster
06:24 spalai joined #gluster
06:24 aravindavk joined #gluster
06:25 post-factum Neilo: client-server connection is 10GbE as well?
06:27 atalur joined #gluster
06:30 Manikandan joined #gluster
06:31 Neilo post-factum: yes, will be 10GbE from client to server.
06:31 harish_ joined #gluster
06:33 jiffin joined #gluster
06:36 [Enrico] joined #gluster
06:36 jiffin1 joined #gluster
06:36 ahino joined #gluster
06:37 overclk joined #gluster
06:37 sac joined #gluster
06:39 beeradb joined #gluster
06:41 vmallika joined #gluster
06:42 jri joined #gluster
06:43 jiffin joined #gluster
06:44 theron joined #gluster
06:52 post-factum Neilo: what is the workload for such a cluster?
06:54 ashiq joined #gluster
06:55 [Enrico] joined #gluster
07:01 haomaiwa_ joined #gluster
07:07 jwd joined #gluster
07:12 baojg joined #gluster
07:18 unlaudable joined #gluster
07:19 ahino joined #gluster
07:20 jugaad joined #gluster
07:20 jugaad HI all
07:20 Slashman joined #gluster
07:21 jugaad Does anyone know of a way to auto failover with a distributed setup?
07:22 harish_ joined #gluster
07:26 Bhaskarakiran joined #gluster
07:26 spalai joined #gluster
07:29 ahino joined #gluster
07:34 jiffin joined #gluster
07:38 madnexus joined #gluster
07:38 Ulrar One of my nodes just had a weird problem, and when I try to start a full heal I get this in the logs: 0-glusterfs: Couldn't get xlator xl-0
07:38 Ulrar Googled it a bit, but it looks like I pretty much only find IRC logs of people who didn't get answers
07:39 Ulrar Maybe I'll be luckier
07:46 fsimonce joined #gluster
07:52 mhulsman joined #gluster
07:56 prasanth joined #gluster
08:00 prasanth joined #gluster
08:01 haomaiwa_ joined #gluster
08:05 ctria joined #gluster
08:07 nbalacha joined #gluster
08:10 vmallika joined #gluster
08:12 [diablo] joined #gluster
08:23 mhulsman joined #gluster
08:27 hackman joined #gluster
08:28 prasanth joined #gluster
08:43 robb_nl joined #gluster
08:51 juhaj joined #gluster
08:53 Iouns joined #gluster
08:53 Bhaskarakiran joined #gluster
08:58 jwaibel joined #gluster
09:01 atalur_ joined #gluster
09:01 haomaiwa_ joined #gluster
09:01 skoduri_ joined #gluster
09:02 rouven joined #gluster
09:02 Rasathus joined #gluster
09:06 EinstCrazy joined #gluster
09:07 madnexus joined #gluster
09:11 kbyrne joined #gluster
09:11 nishanth joined #gluster
09:17 paul98 joined #gluster
09:18 hackman joined #gluster
09:19 paul98 hi, I'm trying to install gluster on a CentOS install and getting the following error. http://pastebin.centos.org/42711/
09:22 atinm paul98, http://linoxide.com/file-system/install-configure-glusterfs-centos-7-aarch64 should help
09:23 mhulsman1 joined #gluster
09:31 Neilo_ joined #gluster
09:33 Neilo_ Post-factum: our server will be saving video and audio from several HD cameras, as they record a live event. Our editors want to edit content ASAP, on our existing gluster cluster.
09:35 post-factum Neilo_: oh, that is quite a heavy workload
09:35 post-factum Neilo_: consider enabling sharding for gluster volume in order to balance the load
09:36 post-factum Neilo_: what is the average size of a video file?
09:38 ashiq joined #gluster
09:40 robb_nl joined #gluster
09:46 Neilo_ joined #gluster
09:48 paul98 atinm: ha, I did some googling, found it, and noticed I was missing the repo, so I added that and it worked fine! thanks :)
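For anyone hitting the same error: the usual fix is to enable the CentOS Storage SIG repository before installing the server packages. A rough sketch only; the release package name varies by GlusterFS version, and centos-release-gluster38 below is just an example, not confirmed from the paste:
    # enable the Storage SIG repo that provides the glusterfs packages
    yum install -y centos-release-gluster38
    # install and start the gluster management daemon
    yum install -y glusterfs-server
    systemctl enable --now glusterd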
09:49 TvL2386 joined #gluster
09:57 petan joined #gluster
09:58 Neilo_ joined #gluster
09:59 Neilo_ post-factum: each file would be 20 to 50GB in size.
10:01 haomaiwa_ joined #gluster
10:02 skoduri joined #gluster
10:08 ashiq joined #gluster
10:39 baojg joined #gluster
10:42 hgowtham joined #gluster
10:50 ira joined #gluster
10:54 ira_ joined #gluster
10:55 jugaad Does anyone have any understanding of what happens when I issue an "ls" on a glusterfs mount?
10:57 post-factum Neilo: you definitely want sharding
10:57 post-factum Neilo: sharding splits large files into smaller chunks, which avoids global locking on healing and spreads the load across distributed bricks
10:58 post-factum jugaad: you could use strace to watch what happens on ls invocation
10:58 post-factum jugaad: like "strace ls /mnt/gluster"
10:59 jugaad I have had a look at that, and it seems like it is an LDAP lookup
10:59 jugaad we are tying to mount /home with gluster
10:59 muneerse joined #gluster
10:59 jugaad we have about 2-500 users, which is a fair few directories
11:00 jugaad if I chown all the directories to root, "ls" on the glusterfs takes literally 1 second (about half the time of an NFS mount)
11:00 jugaad if I chown all the directories to their real owners, i.e. chown user1 user1/, chown user2 user2/ etc., then this takes 5-10 minutes
11:00 post-factum jugaad: it should be faster on the second invocation due to cache
11:01 jugaad it always takes the same amount of time, roughly
11:01 post-factum jugaad: are you talking about ls?
11:01 haomaiwa_ joined #gluster
11:01 jugaad yeah, just a simple "ls"
11:01 post-factum jugaad: and your glusterfs version is?..
11:02 jugaad 3.7.9
11:02 post-factum jugaad: how many files the folder has?
11:03 jugaad almost 1800 - woah, didn't realise there were that many!
11:03 jugaad 1791 directories within /home to be precise
11:04 post-factum jugaad: do you have some feeling about it being wrong :)?
11:04 jugaad yes
11:04 Neilo_ joined #gluster
11:04 jugaad mounting it over nfs, it takes about 2 seconds
11:04 jugaad mounting over glusterfs takes about 5 - 10 minutes
11:04 jugaad if all the directories are owned by root, this takes 2 seconds again
11:05 jugaad it cannot be that slow? :-s
11:05 post-factum jugaad: emm, do you mean, performing "mount" command takes 5 mins?
11:05 jugaad no, performing the "ls" takes 5 mins
11:05 post-factum that is definitely wrong
11:05 post-factum could you please do some volume profiling?
11:06 post-factum @profiling
11:06 post-factum glusterbot: where are you?
11:07 post-factum jugaad: https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Monitoring%20Workload/
11:07 glusterbot Title: Monitoring Workload - Gluster Docs (at gluster.readthedocs.org)
11:07 jugaad Let me have a read :-) cheers!
11:08 post-factum jugaad: also, this is for client: sudo setfattr -n trusted.io-stats-dump -v /tmp/file-with-client-stats.txt /mnt/glusterfs_mountpoint
11:08 post-factum @profile
11:09 post-factum glusterbot: profile
11:09 post-factum @learn profiling as To monitor your GlusterFS workload, please, read https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Monitoring%20Workload/ first. Also, consider gathering client-side stats with 'setfattr -n trusted.io-stats-dump -v /tmp/file-with-client-stats.txt /mnt/glusterfs_mountpoint' command.
11:09 glusterbot post-factum: The operation succeeded.
11:10 post-factum @profiling
11:10 glusterbot post-factum: To monitor your GlusterFS workload, please, read https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Monitoring%20Workload/ first. Also, consider gathering client-side stats with 'setfattr -n trusted.io-stats-dump -v /tmp/file-with-client-stats.txt /mnt/glusterfs_mountpoint' command.
11:10 post-factum better
11:10 post-factum glusterbot++
11:10 glusterbot post-factum: glusterbot's karma is now 11
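For reference, the server-side profiling flow from the linked guide is roughly the following (the volume name "homes" is just a placeholder):
    # start collecting per-brick latency and fop statistics
    gluster volume profile homes start
    # reproduce the slow ls, then dump the accumulated stats
    gluster volume profile homes info
    # stop profiling when done
    gluster volume profile homes stop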
11:11 Neilo_ Post-factum:  nice, what shard sizes should we try first?
11:11 muneerse joined #gluster
11:11 post-factum Neilo_: according to gluster devs, 512M is well-tested. personally, I tried 128M with no visible issues
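Enabling sharding on an existing volume looks roughly like this (the volume name "fastvol" is a placeholder, and note that sharding only applies to files created after it is turned on):
    # enable the shard translator on the volume
    gluster volume set fastvol features.shard on
    # use the 512M chunk size discussed above
    gluster volume set fastvol features.shard-block-size 512MB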
11:12 post-factum Neilo_: how many files do you have within one folder?
11:12 jugaad post-factum: turned on profiling, getting a *lot* of "LOOKUP" calls
11:12 post-factum jugaad: I believe you should send your stats to gluster mailing list
11:13 post-factum jugaad: describing the setup and workload as well
11:13 jugaad I can do
11:13 jugaad it keeps resetting the "No. of calls"; could it be running out of cache?
11:13 muneerse joined #gluster
11:13 post-factum looks like integer overflow :)
11:15 post-factum jugaad: you could also post your volume options visible via the "gluster volume info" command
11:16 harish_ joined #gluster
11:17 jugaad post-factum++
11:17 glusterbot jugaad: post-factum's karma is now 5
11:17 jugaad Thanks very much for your help :-)
11:17 hchiramm joined #gluster
11:17 hchiramm_ joined #gluster
11:19 post-factum jugaad: np
11:23 Wizek joined #gluster
11:26 mhulsman joined #gluster
11:33 scobanx joined #gluster
11:34 scobanx Hi, Is there a document that describes how to replace a failed server with same name and IP address? I am using 3.7.9 with disperse volume.
11:44 atinm scobanx, http://www.gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server
11:45 skoduri joined #gluster
11:46 scobanx atinm: thanks, can't find that in http://gluster.readthedocs.org/en/latest/Administrator%20Guide/
11:46 glusterbot Title: Index - Gluster Docs (at gluster.readthedocs.org)
12:05 Debloper joined #gluster
12:06 DV joined #gluster
12:10 5EXAAIT2S joined #gluster
12:16 haomaiwa_ joined #gluster
12:16 unclemarc joined #gluster
12:25 mhulsman1 joined #gluster
12:30 chirino_m joined #gluster
12:33 shaunm joined #gluster
12:44 d4n13L joined #gluster
12:45 Bhaskarakiran joined #gluster
12:47 [Enrico] joined #gluster
12:50 d4n13L joined #gluster
12:51 worzieznc joined #gluster
12:59 johnmilton joined #gluster
12:59 mhulsman joined #gluster
13:01 haomaiwa_ joined #gluster
13:03 jermudgeon joined #gluster
13:09 EinstCrazy joined #gluster
13:12 plarsen joined #gluster
13:22 theron joined #gluster
13:22 julim joined #gluster
13:23 nbalacha joined #gluster
13:24 kdhananjay joined #gluster
13:27 anti[Enrico] joined #gluster
13:31 DV joined #gluster
13:39 kdhananjay joined #gluster
13:41 DV joined #gluster
13:50 EinstCrazy joined #gluster
13:51 overclk joined #gluster
13:57 chirino_m joined #gluster
14:01 haomaiwa_ joined #gluster
14:03 skoduri joined #gluster
14:15 vmallika joined #gluster
14:22 vmallika joined #gluster
14:37 overclk joined #gluster
14:40 nbalacha joined #gluster
14:42 Neilo_ joined #gluster
14:43 Neilo_ Post-factum++
14:43 glusterbot Neilo_: Post-factum's karma is now 6
14:44 post-factum hey, it's not me :)
14:44 post-factum I'm with little "p" :)
14:45 Neilo_ post-factum: thanks for the info. The video folder would have 16 files, one per camera.
14:45 Neilo_ post-factum++
14:45 glusterbot Neilo_: post-factum's karma is now 7
14:45 Neilo_ :)
14:45 post-factum hmm, it seems it is case-insensitive
14:46 post-factum nvm
14:46 post-factum ok, 16 files per folder
14:46 post-factum 35G per file in average
14:47 post-factum so, given one folder could have up to 300 files with no impact on lookup...
14:47 post-factum or up to 1000
14:47 post-factum yep, you can stick to 512M shards
14:49 post-factum Neilo_: what is volume layout? distributed-replicated?
14:49 Neilo_ Will do. Have there been any similarly performing systems that you know of? 600 MB per sec ballpark speed?
14:50 Neilo_ Replica 2. This is the planning stage for the fast cluster, so open to suggestions.
14:53 post-factum Neilo_: 2 bricks only?
14:53 Bhaskarakiran joined #gluster
14:55 Neilo_ 4 servers, 8 bricks.
14:57 post-factum aha, so you could choose between 128M, 256M and 512M shard
14:58 post-factum and no, I'm not aware of such a workload anywhere
14:58 post-factum will see how it goes for you
14:58 Bhaskarakiran joined #gluster
14:59 Neilo_ Yeah, we have a unique use case. Has that kind of write speed been seen elsewhere though? That you know of?
15:01 haomaiwang joined #gluster
15:03 Neilo_ Next question would be... How best to copy this data to our existing editor cluster, in our main office? Over a 10GbE link, 20 miles away.
15:04 Neilo_ That's it though :)
15:11 coredump joined #gluster
15:12 hamiller joined #gluster
15:20 coredump joined #gluster
15:21 kpease joined #gluster
15:22 anil joined #gluster
15:23 d0nn1e joined #gluster
15:29 mhulsman joined #gluster
15:29 overclk joined #gluster
15:35 tswartz joined #gluster
15:46 jugaad joined #gluster
15:46 jugaad has anyone configured auto.fs to use gluster?
15:57 coredump joined #gluster
16:00 coredump|br joined #gluster
16:01 haomaiwa_ joined #gluster
16:01 mhulsman joined #gluster
16:08 rafi joined #gluster
16:25 Bhaskarakiran joined #gluster
16:28 mhulsman joined #gluster
16:37 deniszh joined #gluster
16:49 ahino joined #gluster
16:50 dka__ joined #gluster
16:50 dka__ Hi
16:50 glusterbot dka__: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:51 dka__ I need help with my initiation in gluster:  http://stackoverflow.com/questions/36361483/example-marathon-json-deployment-file-when-using-glusterfs-volume-driver
16:51 glusterbot Title: docker - example marathon json deployment file when using glusterfs volume driver - Stack Overflow (at stackoverflow.com)
16:52 rafi joined #gluster
17:00 johnmilton joined #gluster
17:00 Rasathus_ joined #gluster
17:01 haomaiwa_ joined #gluster
17:03 CP|AFK joined #gluster
17:04 anil joined #gluster
17:05 Manikandan joined #gluster
17:11 anil joined #gluster
17:11 ahino joined #gluster
17:15 graper joined #gluster
17:15 nishanth joined #gluster
17:16 papamoose1 joined #gluster
17:17 papamoose1 joined #gluster
17:18 graper after following the quickstart, can I mount the volume on a client pointed to either of the servers?
17:19 dka__ I need help with my initiation in gluster:  http://stackoverflow.com/questions/36361483/example-marathon-json-deployment-file-when-using-glusterfs-volume-driver
17:19 glusterbot Title: docker - example marathon json deployment file when using glusterfs volume driver - Stack Overflow (at stackoverflow.com)
17:21 camg joined #gluster
17:25 hackman joined #gluster
17:28 johnmilton joined #gluster
17:29 post-factum Neilo: rsync?
17:29 dka__ post-factum, what ?
17:32 graper @dka__: I think that was ment for another user
17:38 dka__ graper, I know, I'm just trying to catch some real user to get some support for gluster :)
17:38 dka__ can you help ? it's the stackoverflow link
17:39 dka__ I want to test glusterfs fs docker driver and I am stuck
17:41 graper my company doesn’t work with docker, so I wouldn’t know how to help with it, sorry
17:41 haomaiwa_ joined #gluster
17:43 * post-factum neither
17:44 graper would setting up HAProxy in front of the gluster servers be prudent?
17:47 gem joined #gluster
17:52 rafi joined #gluster
17:57 shubhendu joined #gluster
17:58 om joined #gluster
18:01 haomaiwa_ joined #gluster
18:06 ahino joined #gluster
18:14 coredump joined #gluster
18:19 mhulsman joined #gluster
18:21 jwd joined #gluster
18:28 nathwill joined #gluster
18:46 skylar joined #gluster
18:49 rafi1 joined #gluster
18:52 mhulsman joined #gluster
18:59 DJVG joined #gluster
18:59 DJVG Hey all. We're recovering from a power spike and power loss and all the data is in sync again, but I have a lot of these errors in the logs: client-rpc-fops.c:2774:client3_3_lookup_cbk] 0-uf0-client-0: remote operation failed: No such file or directory.
19:00 DJVG I think this causes delayed heal because sometimes it takes up to an hour for the storage machines to be in sync again
19:01 DJVG I think this is because the data it is trying to heal is unavailable from any machine; the data was lost during the outage
19:01 haomaiwa_ joined #gluster
19:01 DJVG If I perform the heal command manually it resolves the issue at that moment but when normal files are added/changed the heal process looks really slow again
19:03 DJVG I don't really know where it gets those paths from, they don't show up in the gluster volume heal uf0 info list
19:04 DJVG And while it's working I'm still getting errors from our monitoring that the healcount is too high for too long and we didn't have this problem before the outage
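A quick way to see whether a heal backlog like this is actually draining, assuming the volume named uf0 as above:
    # list entries still pending heal
    gluster volume heal uf0 info
    # per-brick count of pending entries (handy for trending in monitoring)
    gluster volume heal uf0 statistics heal-count
    # kick off a full sweep if entries look stuck
    gluster volume heal uf0 full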
19:07 post-factum graper: haproxy is useless for glusterfs
19:08 papamoose joined #gluster
19:09 billputer joined #gluster
19:11 nishanth joined #gluster
19:12 daMaestro joined #gluster
19:30 rafi joined #gluster
19:30 madnexus joined #gluster
19:39 dka__ joined #gluster
19:48 graper @post-factum: thanks
19:51 graper the quickstart guide has a mount pointing to server1.  Is there something that tells the mount to connect to multiple servers for failover?
19:51 graper I'm reading the "Setting up Clients" guide and know about the volfile, but it looks like the fstab options don't include it.
19:59 ahino joined #gluster
20:01 haomaiwa_ joined #gluster
20:01 post-factum graper: you may use round-robin DNS or specify a failover server directly. however, after connecting to a server, the client will learn about the other servers and use them as failover automatically
20:14 graper I think my concern revolves around the fstab not having the ability to specify the backupvolfile-server. I imagine DNS tricks like round-robin will help with that, though. that's a good tip.
20:14 karnan joined #gluster
20:23 klfwip joined #gluster
20:25 nathwill joined #gluster
20:31 papamoose1 left #gluster
20:41 Rasathus joined #gluster
20:44 nathwill joined #gluster
20:52 post-factum graper: you could definitely specify mount option backup-volfile-servers= in fstab
20:53 graper ok, just didn’t see that in the documentation
20:53 post-factum graper: or backupvolfile-server=
20:53 post-factum graper: man mount.glusterfs
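A minimal fstab line along those lines, assuming hypothetical hosts gluster1/gluster2/gluster3 and a volume named gv0:
    gluster1:/gv0  /mnt/gluster  glusterfs  defaults,_netdev,backup-volfile-servers=gluster2:gluster3  0 0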
21:01 haomaiwa_ joined #gluster
21:03 ju5t joined #gluster
21:12 CP|AFK joined #gluster
21:15 rafi joined #gluster
21:20 ahino joined #gluster
21:33 nathwill joined #gluster
22:01 haomaiwa_ joined #gluster
22:21 gbox joined #gluster
22:21 shyam joined #gluster
22:50 gbox Is there a way to manually run the self-heal daemon?
22:53 gbox The self-heal daemon for a 2*2 dist-repl volume stopped functioning.  `gluster volume status` shows everything online but `gluster volume heal <volname> info` hangs
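In case it helps later readers, the usual way to poke the self-heal daemon by hand is roughly the following (volume name is a placeholder):
    # restart the self-heal daemon along with any other missing volume daemons
    gluster volume start <volname> force
    # ask the daemon to crawl the bricks and heal everything it can
    gluster volume heal <volname> full
    # confirm the self-heal daemon processes are listed as online
    gluster volume status <volname>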
23:01 haomaiwa_ joined #gluster
23:02 kpease joined #gluster
23:18 gbox `gluster volume heal <volname> info` slowly fills cli.log with "[socket.c:2355:socket_event_handler] 0-transport: disconnecting now"
23:20 tswartz left #gluster
23:24 gbox OK one peer has dropped out completely.  How can I add it back?
23:26 gbox Nevermind just weird `gluster peer status` output.
23:55 ahino joined #gluster
23:56 theron joined #gluster
