IRC log for #gluster, 2016-05-25

All times shown according to UTC.

Time Nick Message
00:16 beeradb joined #gluster
00:33 plarsen joined #gluster
00:43 hi11111 joined #gluster
00:47 luizcpg joined #gluster
00:57 haomaiwang joined #gluster
00:58 haomaiwang joined #gluster
01:01 haomaiwang joined #gluster
01:30 EinstCrazy joined #gluster
01:33 plarsen joined #gluster
01:38 Lee1092 joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:01 haomaiwang joined #gluster
02:04 julim joined #gluster
02:04 beeradb joined #gluster
02:23 harish_ joined #gluster
02:33 beeradb joined #gluster
02:45 siel joined #gluster
02:47 harish joined #gluster
03:01 haomaiwang joined #gluster
03:10 prasanth joined #gluster
03:18 DV_ joined #gluster
03:28 Siavash_ joined #gluster
03:44 atinmu joined #gluster
03:47 hagarth joined #gluster
03:54 raghug joined #gluster
03:54 kdhananjay joined #gluster
03:58 nehar joined #gluster
04:00 itisravi joined #gluster
04:01 haomaiwang joined #gluster
04:03 nbalacha joined #gluster
04:04 ppai joined #gluster
04:07 overclk joined #gluster
04:14 aravindavk joined #gluster
04:21 arcolife joined #gluster
04:21 nbalacha joined #gluster
04:27 beeradb joined #gluster
04:31 gowtham joined #gluster
04:31 atalur joined #gluster
04:36 shubhendu joined #gluster
04:37 RameshN joined #gluster
04:37 shubhendu joined #gluster
04:38 sakshi joined #gluster
04:41 kotreshhr joined #gluster
05:01 haomaiwang joined #gluster
05:10 poornimag joined #gluster
05:11 Apeksha joined #gluster
05:14 hgowtham joined #gluster
05:16 ramky joined #gluster
05:17 overclk joined #gluster
05:18 Gnomethrower joined #gluster
05:24 Bhaskarakiran joined #gluster
05:26 aspandey joined #gluster
05:28 sakshi joined #gluster
05:30 ashiq joined #gluster
05:32 Manikandan joined #gluster
05:35 rastar joined #gluster
05:46 liewegas joined #gluster
05:53 karthik___ joined #gluster
05:54 rafi joined #gluster
06:01 haomaiwang joined #gluster
06:02 ppai joined #gluster
06:09 skoduri joined #gluster
06:11 kovshenin joined #gluster
06:11 prasanth joined #gluster
06:12 prasanth joined #gluster
06:16 spalai joined #gluster
06:16 kovshenin joined #gluster
06:17 nthomas joined #gluster
06:18 pur__ joined #gluster
06:19 shubhendu_ joined #gluster
06:20 jiffin joined #gluster
06:24 jtux joined #gluster
06:24 Siavash_ joined #gluster
06:25 hackman joined #gluster
06:28 atalur joined #gluster
06:31 hchiramm joined #gluster
06:45 Gnomethrower joined #gluster
06:47 karnan joined #gluster
06:51 overclk joined #gluster
06:53 overclk joined #gluster
06:54 jri joined #gluster
06:59 shubhendu joined #gluster
06:59 jiffin joined #gluster
07:01 haomaiwang joined #gluster
07:05 deniszh joined #gluster
07:06 kdhananjay joined #gluster
07:09 shubhendu joined #gluster
07:11 ivan_rossi joined #gluster
07:17 rafi joined #gluster
07:18 rafi_kc joined #gluster
07:27 rafi1 joined #gluster
07:29 kblin joined #gluster
07:30 kblin hi folks
07:31 kblin I'm looking into setting up gluster on a bunch of older servers to get easier access to disk space in that isolated rack.
07:32 kblin now, it just so happens that I have three servers that have significant storage capacity, and I don't really need replica=3
07:33 kblin so I'm wondering what the best way to configure the volumes would be
07:35 eKKiM_ joined #gluster
07:35 kblin I guess I could do 3 bricks (1-2), (1-3), (2-3), and create a volume out of the three
07:35 kblin any other option that I'm missing?
07:35 anil_ joined #gluster
07:37 post-factumkb kblin: what is the task you are trying to solve?
07:39 kblin post-factumkb: I have three servers with ~7 TB each, I want a distributed FS between them. It'd be nice if that filesystem was larger than 7 TB, in other words, I don't need three copies of every file in there
07:41 post-factumkb kblin: then yes, you would go with a distributed-replicated setup with circular replicas 1-2, 2-3, 3-1
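
A minimal sketch of the circular replica-2 layout described above, assuming hypothetical hostnames srv1-srv3 and brick paths under /data; each replica pair spans two different servers, so with three ~7 TB nodes roughly 10.5 TB becomes usable while every file still exists on two machines:

    gluster peer probe srv2
    gluster peer probe srv3
    gluster volume create bigvol replica 2 \
        srv1:/data/brick-a srv2:/data/brick-a \
        srv2:/data/brick-b srv3:/data/brick-b \
        srv3:/data/brick-c srv1:/data/brick-c
    gluster volume start bigvol
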
07:41 rafi_kc joined #gluster
07:42 Wizek joined #gluster
07:48 gem joined #gluster
07:56 ctria joined #gluster
08:00 m0zes joined #gluster
08:01 haomaiwang joined #gluster
08:02 Wizek joined #gluster
08:02 atalur joined #gluster
08:03 ctrianta joined #gluster
08:11 ctrianta joined #gluster
08:11 Ulrar kblin: You don't have to use replica
08:12 Wizek_ joined #gluster
08:12 jgjorgji joined #gluster
08:12 karthik___ joined #gluster
08:14 Dasiel joined #gluster
08:15 kblin Ulrar: well, having 2 copies seems like a good idea
08:30 nbalacha joined #gluster
08:32 [Enrico] joined #gluster
08:32 prasanth joined #gluster
08:33 theeboat joined #gluster
08:33 theeboat i've been reading up on gluster and i'm wondering whether having hardware raid at the bottom level is a necessity?
08:34 nbalacha joined #gluster
08:37 Dasiel joined #gluster
08:41 post-factum theeboat: it depends
08:42 theeboat my current setup doesn't have a raid card. just using HBAs. Considered using mdadm but then I started to wonder whether the raid aspect is actually needed
08:43 haomaiwang joined #gluster
08:46 harish_ joined #gluster
08:50 olia joined #gluster
08:54 post-factum for us, having replica 2 over 2 nodes, raid is a must
08:54 post-factum if you have replica 3, i'd go without raid
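
A sketch of what that replica 3, no-raid shape could look like, with hypothetical hosts n1-n3 and one brick per raw HBA disk; each replica set of three spans all three nodes:

    gluster volume create vmvol replica 3 \
        n1:/bricks/disk1 n2:/bricks/disk1 n3:/bricks/disk1 \
        n1:/bricks/disk2 n2:/bricks/disk2 n3:/bricks/disk2
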
08:55 theeboat thanks for your help
08:56 Dasiel joined #gluster
08:59 rafi joined #gluster
09:01 haomaiwang joined #gluster
09:01 Dasiel joined #gluster
09:16 atalur joined #gluster
09:22 skoduri joined #gluster
09:26 raghug joined #gluster
09:37 rafi1 joined #gluster
09:47 [Enrico] joined #gluster
09:52 kovshenin joined #gluster
09:55 Saravanakmr joined #gluster
09:57 olia left #gluster
10:01 haomaiwang joined #gluster
10:04 karnan joined #gluster
10:09 EinstCrazy joined #gluster
10:13 Debloper joined #gluster
10:14 mpietersen joined #gluster
10:15 Olia_ joined #gluster
10:16 itisravi joined #gluster
10:16 Olia__ joined #gluster
10:19 muneerse joined #gluster
10:34 ndarshan joined #gluster
10:38 pfactum joined #gluster
10:43 JesperA joined #gluster
10:43 atinm joined #gluster
10:47 pavelion joined #gluster
10:47 skoduri joined #gluster
10:59 arcolife joined #gluster
11:01 haomaiwang joined #gluster
11:07 overclk joined #gluster
11:10 Manikandan_ joined #gluster
11:13 rafi joined #gluster
11:23 Biopandemic joined #gluster
11:29 johnmilton joined #gluster
11:30 Biopandemic joined #gluster
11:41 rafi1 joined #gluster
11:44 gowtham joined #gluster
11:45 mowntan joined #gluster
11:50 guhcampos joined #gluster
11:53 Biopandemic joined #gluster
11:53 raghug joined #gluster
11:54 pavelion_ joined #gluster
11:54 karnan joined #gluster
11:56 nthomas joined #gluster
11:56 ramky joined #gluster
11:58 rastar Gluster community meeting starts in 2 minutes in #gluster-meeting
12:03 [Enrico] joined #gluster
12:07 atinm joined #gluster
12:14 guhcampos joined #gluster
12:16 DV_ joined #gluster
12:16 overclk joined #gluster
12:20 haomaiwang joined #gluster
12:24 ira joined #gluster
12:27 aspandey_ joined #gluster
12:37 rafi joined #gluster
12:39 Biopandemic joined #gluster
12:41 arcolife joined #gluster
12:44 B21956 joined #gluster
12:54 guhcampos joined #gluster
12:54 unclemarc joined #gluster
12:59 plarsen joined #gluster
13:01 haomaiwang joined #gluster
13:02 nthomas joined #gluster
13:02 karnan joined #gluster
13:05 kovshenin joined #gluster
13:05 ppai joined #gluster
13:08 chirino_m joined #gluster
13:09 ivan_rossi left #gluster
13:11 luizcpg joined #gluster
13:12 guhcampos_ joined #gluster
13:15 kkeithley joined #gluster
13:17 nehar joined #gluster
13:21 nthomas joined #gluster
13:26 luizcpg joined #gluster
13:30 ira joined #gluster
13:33 rafi1 joined #gluster
13:33 jiffin1 joined #gluster
13:37 plarsen joined #gluster
13:40 [Enrico] joined #gluster
13:41 ravana_2 joined #gluster
13:48 beeradb joined #gluster
13:56 jiffin1 joined #gluster
13:58 skylar joined #gluster
14:01 haomaiwang joined #gluster
14:01 jiffin joined #gluster
14:02 nbalacha joined #gluster
14:11 Apeksha joined #gluster
14:12 spalai joined #gluster
14:14 dlambrig_ joined #gluster
14:15 hagarth joined #gluster
14:24 dgandhi joined #gluster
14:25 dgandhi joined #gluster
14:26 rwheeler joined #gluster
14:27 unforgiven512 joined #gluster
14:28 unforgiven512 joined #gluster
14:28 unforgiven512 joined #gluster
14:29 unforgiven512 joined #gluster
14:36 wushudoin joined #gluster
14:37 wushudoin joined #gluster
14:38 modaya joined #gluster
14:40 shaunm joined #gluster
14:41 modaya We had a problem with GlusterFS 3.7.11 on CentOS 7 where the Gluster mount point hangs repeatedly while doing IO-intensive tasks
14:41 modaya After lots of troubleshooting and workarounds, we tried upgrading Linux kernel from 3.10.0-1 to 4.5.4-1
14:41 modaya and it worked
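
modaya did not say how the kernel was upgraded; one common route on CentOS 7 is the ELRepo kernel-ml package (this sketch assumes the elrepo-kernel repository is already set up):

    uname -r                                          # e.g. 3.10.0-327.el7.x86_64
    yum --enablerepo=elrepo-kernel install kernel-ml  # mainline 4.x kernel
    grub2-set-default 0                               # boot the newest menu entry
    reboot
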
14:44 jobewan joined #gluster
14:48 shubhendu joined #gluster
14:49 shubhendu joined #gluster
14:50 EinstCrazy joined #gluster
14:52 ctrianta joined #gluster
14:57 partner ok, did some tests with attach/detach stuff (ref weekly meeting)
15:00 dlambrig_ joined #gluster
15:00 partner ?? pastebin
15:00 partner darn, i've forgotten all the practicalities already.. :)
15:03 archit_ joined #gluster
15:03 partner this is for LVM: https://paste.fedoraproject.org/370731/88577146/raw/ - there is absolutely nothing funky there
15:05 partner this is for GlusterFS via libgfapi and the difference to the above is quite visible: https://paste.fedoraproject.org/370734/18870814/raw/
15:05 partner post-factum: ^
15:06 ndevos @paste
15:06 glusterbot ndevos: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
15:06 partner thanks, i'm rusty :)
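
For example, to feed volume information to the pastebin glusterbot mentions (volume name hypothetical):

    sudo gluster volume info | nc termbin.com 9999
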
15:08 partner the problem this creates is that in an active environment where Heat stacks come and go there are lots of attach/detach operations, and eventually libvirtd runs out of memory and things start to fall apart
15:09 ndevos partner: I expect that it is 'only' the qemu process for the VM, not libvirt itself?
15:10 partner that vmsize represents the libvirtd size; i did not monitor qemu-kvm values at all, but if someone needs more data i can provide it
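
A trivial watcher along the lines of whatever is producing those VmSize figures might look like this (a sketch, not partner's actual script):

    # log libvirtd's virtual memory size once a minute
    while sleep 60; do
        echo "$(date '+%b %d %H:%M:%S') $(grep VmSize /proc/$(pidof libvirtd)/status)"
    done
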
15:11 EinstCrazy joined #gluster
15:12 ndevos hmm, I don't know how libvirtd uses libgfapi, I was not even aware it uses it at all!
15:12 rafi joined #gluster
15:12 ndevos I thought it would configure qemu, and communicate with the qemu process...
15:13 karnan joined #gluster
15:13 JoeJulian I'm mostly sure libvirtd does not use libgfapi. Unless that changed very recently.
15:14 nthomas joined #gluster
15:14 partner yeah, it's not that daemon. basically one configures this: qemu_allowed_storage_drivers=gluster
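
That option goes in nova.conf on the compute nodes; roughly (section placement as I recall for the Nova releases of that era):

    # /etc/nova/nova.conf
    [libvirt]
    qemu_allowed_storage_drivers = gluster

With qemu built against libgfapi, this lets nova attach volumes to instances as gluster:// network disks instead of going through a fuse mount, which matches the libgfapi leak being discussed.
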
15:15 pur__ joined #gluster
15:16 JoeJulian Oh, that's qemu
15:16 JoeJulian Sounds like a qemu bug.
15:16 ndevos nah, its a memleak in libgfapi :-(
15:17 JoeJulian Ah, bummer.
15:17 ndevos we've fixed several of them already, but the cleanup of the xlator stack is rather difficult to get right
15:17 ndevos more recent versions should leak less... not sure how much that is though
15:18 ndevos and nfs-ganesha is hit by it as well, when dynamically adding/removing exports (volumes or subdirs)
15:18 EinstCrazy joined #gluster
15:18 ndevos skoduri was tracking that a bit more, but she's not online anymore
15:18 * ndevos needs to leave as well now...
15:19 JoeJulian Gluster is 10 years old this week....
15:19 JoeJulian Goodnight ndevos.
15:19 partner me too. we'll try different versions tomorrow and report our findings back here; today was busy doing some workarounds and metrics around the topic
15:19 EinstCrazy joined #gluster
15:24 andy-b joined #gluster
15:24 atinm joined #gluster
15:25 EinstCrazy joined #gluster
15:28 EinstCrazy joined #gluster
15:29 shubhendu_ joined #gluster
15:35 kpease joined #gluster
15:38 haomaiwang joined #gluster
15:39 ramky_ joined #gluster
15:39 partner having a hard time leaving...
15:40 partner ERROR (NotFound): volume_id not found: 3f869181-8e8f-44c5-a862-cae11b020402 (HTTP 404) (Request-ID: req-b067b304-d91c-4c5f-90a5-ef2c1cea9840)
15:40 partner May 25 15:37:34 VmSize: 24847532 kB
15:40 partner volume is there BUT also this is in log: libvirtd[25813]: out of memory
15:44 hagarth joined #gluster
15:59 ctrianta joined #gluster
16:05 arcolife joined #gluster
16:06 chirino joined #gluster
16:11 gigzbyte joined #gluster
16:11 gigzbyte hi all!
16:11 partner it took under 70 attempts to make it fail, that is not much.. oh well, need to head home now ->
16:11 gigzbyte Can you help me please? Can I connect a gluster client at ver 3.5 to a gluster server at ver 3.2?
16:12 [o__o] joined #gluster
16:17 gigzbyte hello!
16:17 glusterbot gigzbyte: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:23 jlp1 joined #gluster
16:26 Siavash_ joined #gluster
16:26 DV_ joined #gluster
16:27 Siavash__ joined #gluster
16:29 ashiq joined #gluster
16:29 JoeJulian gigzbyte: I've never tried a 3.5 client with a 3.2 server. 3.4 was the farthest I went. It might work though. Are you getting an error? ,,(paste) your client log.
16:29 glusterbot gigzbyte: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
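
A quick way to confirm the versions on both ends and grab the client log the two messages above ask for (the log file name depends on the mount point; the one below is hypothetical):

    glusterfs --version                      # on the 3.5 client
    glusterd --version                       # on the 3.2 server
    tail -n 100 /var/log/glusterfs/mnt-data.log | nc termbin.com 9999
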
16:39 ramky__ joined #gluster
16:43 skoduri joined #gluster
16:44 level7 joined #gluster
16:49 rhartkopf joined #gluster
16:53 rhartkopf I'm trying to add a 4th replica to my 3-replica gluster setup right now, and getting an error
16:53 rhartkopf sudo gluster volume add-brick gv0 replica 4 newhost:/sharedfiles/brick0
16:54 Manikandan_ joined #gluster
16:54 rhartkopf volume add-brick: failed:
16:54 rhartkopf in the logs: 0-management: op_ctx modification failed
16:55 rhartkopf Has anyone seen this before? I'm running v3.4.5
16:55 JoeJulian You'll need to check all your glusterd logs to see which one's failing the modification and (hopefully) why.
16:55 JoeJulian And wow.. it's old-timey day here in #gluster. :D
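
Roughly, on each server (the glusterd log name below is the usual default under /var/log/glusterfs; adjust if your volfile path differs):

    grep -iE "add-brick|op_ctx" /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | tail -n 20
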
17:00 Siavash__ joined #gluster
17:00 nathwill joined #gluster
17:03 spalai joined #gluster
17:06 rhartkopf Thanks @JoeJulian, I ran the command on the target machine and got the error I was looking for.
17:06 JoeJulian Ah, cool. (what was it?)
17:07 rhartkopf volume add-brick: failed: The brick prod6.card.cm:/sharedfiles/brick0 is a mount point. Please create a sub-directory under the mount point and use that as the brick directory. Or use 'force' at the end of the command if you want to override this behavior.
17:07 rhartkopf *facepalm*
17:08 JoeJulian Ah, right. I should have guessed that.
17:08 JoeJulian I blame it on lack of coffee.
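
For the record, the fix the error message asks for is just a sub-directory under the mounted filesystem, roughly (host and paths follow the example above):

    # on the new host, where /sharedfiles/brick0 is the mount point
    sudo mkdir /sharedfiles/brick0/brick
    sudo gluster volume add-brick gv0 replica 4 newhost:/sharedfiles/brick0/brick
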
17:23 spalai left #gluster
17:24 post-factum partner: no help for 3.6 from me :)
17:26 jiffin joined #gluster
17:30 JoeJulian Oh, 3.6? No wonder.
17:41 kovshenin joined #gluster
17:42 * post-factum is preparing himself to dive into 3.8 upgrade soon
17:42 post-factum i hope one could start with 3.8 clients
17:45 Manikandan joined #gluster
17:53 hagarth joined #gluster
18:07 skylar joined #gluster
18:12 techsenshi joined #gluster
18:18 jlp1 joined #gluster
18:21 hi11111 joined #gluster
18:27 partner 3.6 is "stable" ;)
18:28 JoeJulian Meh
18:28 skylar joined #gluster
18:28 partner but i'll play with the available versions
18:28 rafi joined #gluster
18:31 level7 joined #gluster
18:31 partner let's see 3.6.9 first as it's the easiest
18:31 plarsen joined #gluster
18:32 JoeJulian I'm not sure how many of those memory leaks were backported, but it's worth a try.
18:32 partner i can immediately say no help there..
18:32 partner May 25 18:32:24 VmSize:   743964 kB
18:32 partner May 25 18:32:28 VmSize:  1393764 kB
18:33 partner there's the difference already with one round of attach/detach
18:34 partner i suppose our use cases once again differ from the mainstream; we use heat extensively and recreate stacks a lot, which involves these operations quite a bit
18:35 partner not sure if fuse had the same issues, but we stopped using it as the mounts ended up in the nova systemd cgroup and a restart of the service killed the mounts... i know we could alter the "kill" behaviour but this seemed a better way
18:40 shyam joined #gluster
19:02 nthomas joined #gluster
19:14 deniszh joined #gluster
19:15 level7 joined #gluster
19:25 jgrimmett joined #gluster
19:25 jgrimmett hello all
19:27 jgrimmett i have gluster running and am trying to gauge read/write performance... it is being used to store kvm vm's, and we are using RDMA to SSDs... we are getting 860+ MB/s write and 1.6 GB/s read... i'm not sure if that's supposed to be good in the gluster world or if it should be better
19:28 ira joined #gluster
19:30 Siavash___ joined #gluster
19:30 mpietersen joined #gluster
19:45 post-factum jgrimmett: generally, it is good if it is enough :)
19:45 JoeJulian +1
19:47 skoduri joined #gluster
19:47 hagarth joined #gluster
19:48 JoeJulian That does sound like the read/write performance of a single ssd.
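
Numbers like those are commonly produced with a rough sequential dd run against the mount; a sketch with hypothetical paths (note this exercises the fuse mount, not the qemu/RDMA path the VMs actually use):

    dd if=/dev/zero of=/mnt/gv0/ddtest bs=1M count=8192 oflag=direct
    echo 3 | sudo tee /proc/sys/vm/drop_caches      # drop page cache before the read pass
    dd if=/mnt/gv0/ddtest of=/dev/null bs=1M iflag=direct
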
19:56 andy_b joined #gluster
20:09 level7_ joined #gluster
20:10 andy-b joined #gluster
20:11 DV_ joined #gluster
20:16 HoloIRCUser2 joined #gluster
20:35 anil_ joined #gluster
21:09 haomaiwang joined #gluster
21:29 wushudoin joined #gluster
21:31 radius left #gluster
21:31 johnmilton joined #gluster
21:32 hagarth joined #gluster
21:41 wushudoin joined #gluster
22:02 jobewan joined #gluster
22:07 DV_ joined #gluster
22:21 Wizek__ joined #gluster
22:27 techsenshi joined #gluster
22:36 karnan joined #gluster
22:37 karnan joined #gluster
22:52 ahino joined #gluster
22:58 haomaiwang joined #gluster
23:00 radius joined #gluster
23:03 partner just attaching the volume is enough and libvirtd keeps growing, slowly with one single instance/volume, but nevertheless. there's some sort of 10-minute pattern right now where it jumps up 500 megs. https://paste.fedoraproject.org/370922/64217279/raw/
23:11 partner this is with 3.6.9; need to look into the 3.7 series tomorrow. 2am so i'm out ->
23:34 nathwill joined #gluster
23:35 jobewan joined #gluster
23:43 beeradb joined #gluster
23:50 misc joined #gluster
23:56 hackman joined #gluster
23:57 dlambrig_ joined #gluster
