IRC log for #gluster, 2016-05-11


All times shown according to UTC.

Time Nick Message
00:01 haomaiwang joined #gluster
00:02 julim joined #gluster
00:28 mowntan joined #gluster
00:40 mowntan joined #gluster
00:43 jobewan joined #gluster
00:46 haomaiwang joined #gluster
00:54 bennyturns joined #gluster
01:18 EinstCrazy joined #gluster
01:31 haomaiwang joined #gluster
01:34 harish joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:48 ashiq joined #gluster
01:56 haomaiwang joined #gluster
01:57 muneerse joined #gluster
01:59 natarej joined #gluster
02:01 muneerse2 joined #gluster
02:01 haomaiwang joined #gluster
02:05 Lee1092 joined #gluster
02:08 sakshi joined #gluster
02:14 haomaiwang joined #gluster
02:23 nishanth joined #gluster
02:29 overclk joined #gluster
02:30 dgandhi joined #gluster
02:31 EinstCrazy joined #gluster
02:32 dgandhi joined #gluster
02:34 dgandhi joined #gluster
02:36 dgandhi joined #gluster
02:37 dgandhi joined #gluster
02:39 dgandhi joined #gluster
02:40 dgandhi joined #gluster
02:48 EinstCrazy joined #gluster
02:49 nhayashi joined #gluster
02:51 ashiq joined #gluster
02:52 atalur joined #gluster
03:05 haomaiwang joined #gluster
03:06 hagarth joined #gluster
03:07 EinstCrazy joined #gluster
03:20 RameshN joined #gluster
03:21 luizcpg joined #gluster
03:24 EinstCra_ joined #gluster
03:34 sakshi joined #gluster
03:36 raghug joined #gluster
03:40 haomaiwang joined #gluster
03:41 nehar joined #gluster
03:42 Dasiel_ joined #gluster
03:42 nbalacha joined #gluster
03:52 itisravi joined #gluster
03:53 gem joined #gluster
03:53 haomaiwang joined #gluster
04:01 haomaiwang joined #gluster
04:09 shubhendu joined #gluster
04:13 shubhendu joined #gluster
04:18 haomaiwang joined #gluster
04:18 atinm joined #gluster
04:23 _ndevos joined #gluster
04:40 prasanth joined #gluster
04:52 Gnomethrower joined #gluster
04:53 rafi joined #gluster
04:56 hchiramm joined #gluster
04:56 Wizek joined #gluster
05:02 skoduri joined #gluster
05:05 nishanth joined #gluster
05:08 Gnomethrower joined #gluster
05:08 ndarshan joined #gluster
05:09 level7 joined #gluster
05:12 ashiq joined #gluster
05:14 jiffin joined #gluster
05:22 Manikandan joined #gluster
05:28 hgowtham joined #gluster
05:34 aspandey joined #gluster
05:35 poornimag joined #gluster
05:35 ramky joined #gluster
05:36 Apeksha joined #gluster
05:38 kshlm joined #gluster
05:38 karthik___ joined #gluster
05:52 haomaiwang joined #gluster
06:01 haomaiwang joined #gluster
06:02 kdhananjay joined #gluster
06:04 haomaiwang joined #gluster
06:10 spalai joined #gluster
06:15 Bhaskarakiran joined #gluster
06:17 karnan joined #gluster
06:23 pur joined #gluster
06:26 Saravanakmr joined #gluster
06:29 skoduri joined #gluster
06:32 jtux joined #gluster
06:34 ppai joined #gluster
06:36 mchangir joined #gluster
06:39 k4n0 joined #gluster
06:44 dgandhi joined #gluster
06:46 dgandhi joined #gluster
06:48 dgandhi joined #gluster
06:49 dgandhi joined #gluster
06:51 dgandhi joined #gluster
06:51 arcolife joined #gluster
06:51 dgandhi joined #gluster
06:52 anil joined #gluster
06:53 dgandhi joined #gluster
06:57 armyriad joined #gluster
06:58 atinm joined #gluster
06:59 nishanth joined #gluster
06:59 haomaiwang joined #gluster
07:01 nbalacha joined #gluster
07:01 haomaiwang joined #gluster
07:01 aravindavk joined #gluster
07:02 armyriad joined #gluster
07:07 deniszh joined #gluster
07:07 kovshenin joined #gluster
07:11 haomaiwang joined #gluster
07:13 mowntan joined #gluster
07:13 TvL2386 joined #gluster
07:18 ctria joined #gluster
07:20 Manikandan joined #gluster
07:24 fsimonce joined #gluster
07:25 haomaiwang joined #gluster
07:25 Siavash joined #gluster
07:25 Siavash joined #gluster
07:25 [Enrico] joined #gluster
07:25 Siavash Hi guys
07:26 Siavash I have a cluster of 3 nodes.
07:26 Siavash node2 and node3 show all other peers in connected status.
07:26 Siavash but node1 is showing node2 in disconnected status.
07:27 Siavash How can I debug this?
07:29 robb_nl joined #gluster
07:39 ctria joined #gluster
07:44 MikeLupe joined #gluster
07:47 atalur joined #gluster
07:49 karthik___ joined #gluster
07:55 JesperA- joined #gluster
08:01 np_ joined #gluster
08:01 haomaiwang joined #gluster
08:10 Manikandan_ joined #gluster
08:11 atinm joined #gluster
08:14 level7 joined #gluster
08:14 nbalacha joined #gluster
08:25 JesperA joined #gluster
08:36 Slashman joined #gluster
08:38 jiffin joined #gluster
08:39 jiffin1 joined #gluster
08:40 robb_nl joined #gluster
08:43 raghug joined #gluster
08:49 jiffin joined #gluster
08:49 kshlm joined #gluster
08:49 post-factum Siavash: check your firewall first, then check the logs
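
For reference, a minimal way to chase a one-sided peer disconnect like Siavash's, assuming default ports and the stock 3.7-era log location (both are assumptions to adjust):

    # on node1, which sees node2 as disconnected
    gluster peer status
    tail -n 100 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    # glusterd peers talk over 24007/tcp; verify node2 is reachable on it
    iptables -L -n | grep 24007

A disconnect reported by only one side usually points at a firewall or routing rule on that node rather than at gluster itself.
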
08:51 legreffier hello :) I'm having a difficult time with gluster since 3.7.11 on centos6. It fails to start glusterfsd, which just exits with return code 6.
08:52 legreffier it's failing some test in the init script, and I don't see what's going on.
08:54 post-factum legreffier: should you even be trying to start glusterfsd manually?
08:54 dresantos joined #gluster
08:54 arcolife joined #gluster
08:55 ndk_ joined #gluster
08:57 jgjorgji joined #gluster
08:58 jgjorgji i was wondering what the normal performance penalty for gluster is? i have a 4 node replicated cluster and i'm getting about a 3x slowdown on large file transfers
08:58 jgjorgji disk and network io are well below maximum on all nodes
09:01 haomaiwang joined #gluster
09:03 legreffier post-factum: it does restart right after logrotate :/
09:04 legreffier http://pastie.org/10832359
09:04 glusterbot Title: #10832359 - Pastie (at pastie.org)
09:04 legreffier here's my logrotate.conf
09:04 post-factum jgjorgji: with replica 4 you have 4x slowdown because of 4x traffic
09:04 post-factum jgjorgji: i believe you do not need replica 4
09:05 legreffier from what i get, glusterd takes care of starting glusterfsd upon its restart
09:05 legreffier is that how it's supposed to be?
09:05 raghug joined #gluster
09:06 post-factum legreffier: why is glusterd being restarted?
09:06 post-factum legreffier: you should do "/usr/bin/killall -HUP glusterd > /dev/null 2>&1 || true" as well
09:06 post-factum legreffier: no one should restart a critical service just to rotate logs
09:09 legreffier from what i get those services aren't critical to gluster's sanity, they just hand commands over to the gluster cluster
09:09 jgjorgji post-factum: but the network is maxing out at ~220mbit/s while the transfer is running, and it's a 1gbit network
09:09 post-factum jgjorgji: it is all about latency
09:10 post-factum jgjorgji: each write op takes some time, and that time grows as the replica count grows, multiplied by the rtt
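
As a rough sanity check of the numbers in this exchange: with the native client, replication is done client-side, so every write is sent to all replicas. On a 1 Gbit/s client link a replica 4 volume can therefore carry at most about 1000/4 = 250 Mbit/s of payload, which lines up with the ~220 Mbit/s jgjorgji observes once protocol overhead is allowed for.
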
09:11 legreffier it's a slightly modified conf from the official gluster package (i just changed the rotation period and log path)
09:13 post-factum legreffier: hmmm, i doubt that. looking at my logrotate conf from 3.7.11, i see no restart there
09:13 post-factum legreffier: but i'm on centos 7
09:13 post-factum legreffier: probably, you should contact packager
09:14 post-factum ndevos: ^^ are you the packager for el6?
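
For context, the postrotate advice above corresponds to a logrotate stanza roughly like the following; this is a sketch rather than the config shipped in any particular package, and the glob, period and rotate count are placeholders:

    /var/log/glusterfs/*.log {
        weekly
        rotate 52
        missingok
        compress
        delaycompress
        sharedscripts
        postrotate
            # HUP makes the gluster daemons reopen their logs in place,
            # so nothing gets restarted during rotation
            /usr/bin/killall -HUP glusterd > /dev/null 2>&1 || true
        endscript
    }
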
09:14 jgjorgji post-factum: would it be a good idea to have 2 replicated bricks on the same server or just put regular raid1 under the bricks?
09:14 [Enrico] joined #gluster
09:14 post-factum jgjorgji: there is no reason to have gluster replicate within 1 host. raid1 is fine, and we use raid1 as well
09:15 post-factum jgjorgji: you only need replica between hosts
09:15 poornimag joined #gluster
09:16 jgjorgji i tried both configurations as i was curious about the speed and it's about the same
09:18 post-factum jgjorgji: try measuring IOPS on a random load as well; you'd be surprised
09:19 jgjorgji i'm also curious about some test setups, should i just load up a test client on each workstation and try or is there something to automate that?
09:24 arcolife joined #gluster
09:24 [Enrico] joined #gluster
09:25 post-factum jgjorgji: you would want to simulate your real workload first
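
One common way to approximate a real multi-client workload is to run fio against the gluster mount from each test client; a minimal sketch, where the mount point and job parameters are placeholders to tune toward your actual I/O pattern:

    fio --name=randtest --directory=/mnt/gluster \
        --rw=randrw --bs=4k --size=1G --numjobs=4 \
        --ioengine=libaio --direct=1 --group_reporting
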
09:36 RameshN joined #gluster
09:43 jiffin1 joined #gluster
09:47 haomaiwang joined #gluster
09:52 gem joined #gluster
09:55 aravindavk joined #gluster
10:01 paul98 joined #gluster
10:01 haomaiwang joined #gluster
10:02 paul98 hi, i've been using openfiler with an iscsi target for a windows exchange server. i've moved to gluster, set up an iscsi target and then run samba for the mapping - does this work the same?
10:06 Wizek_ joined #gluster
10:12 aspandey joined #gluster
10:17 natarej joined #gluster
10:24 haomaiwa_ joined #gluster
10:32 aspandey joined #gluster
10:35 haomaiwang joined #gluster
10:40 Debloper joined #gluster
10:41 ndevos legreffier: from what repository do you get the glusterfs packages?
10:48 hackman joined #gluster
10:49 haomaiwang joined #gluster
10:57 claude_ joined #gluster
10:57 dgandhi joined #gluster
10:59 dgandhi joined #gluster
10:59 lmkone joined #gluster
11:00 dgandhi joined #gluster
11:01 dgandhi joined #gluster
11:01 lmkone hello all - is it ok if the two servers i have set up replica between listen on two different ports for a specific volume? on one i have got 49156 while on the other 49152?
11:03 dgandhi joined #gluster
11:04 dgandhi joined #gluster
11:06 dgandhi joined #gluster
11:09 kshlm lmkone, That is fine
11:12 lmkone kshlm, even in failover scenario ?
11:13 kshlm Yup.
11:13 kshlm glusterfs clients always connect to both servers.
11:13 kshlm Reads are done from the server that responds fastest
11:14 kshlm writes are sent to both
11:14 kshlm If the server that was being read from goes offline, reads switch to the available server
11:14 level7 joined #gluster
11:14 johnmilton joined #gluster
11:15 lmkone get it - thanks kshlm
11:15 kshlm and writes happen on the remaining server, with some extra metadata to signify that the write was done only on 1 server and needs to be healed when the other comes back.
11:15 lmkone i meant got it
11:15 kshlm :)
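
For reference, the per-brick ports kshlm describes can be inspected with the volume status command (the volume name here is a placeholder):

    gluster volume status myvol
    # each brick gets its own port in the 49152+ range; clients learn
    # it from glusterd on 24007/tcp, so bricks on different servers
    # do not need to share a port number
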
11:17 aspandey joined #gluster
11:19 Gnomethrower joined #gluster
11:20 lmkone what kind of write and read speeds should i expect from decent servers with a 1gb connection?
11:25 robb_nl joined #gluster
11:26 KenM joined #gluster
11:27 jiffin joined #gluster
11:36 KenM hello, i am new to gluster and have heard small-file performance is bad, so i am trying to set up a small test for myself using AWS (2 servers, 1 client). Using the docs, 3x m4.large, the latest Amazon Linux AMI 2016.03 and gluster, I created a stripe volume across the two server nodes. dd tests are fine unless I drop below a 128k block-size for a count of 10k. When I do that, the client process invariably hangs in a "deadly embrace".
11:36 KenM Namely, an strace, lsof, or anything else touching a gluster resource on that client hangs until I reboot
11:37 KenM if i instead create a volume against just one node, the dd test runs fine regardless of how low the block-size is set
11:37 jiffin KenM: stripe volumes are not well supported or maintained
11:37 KenM ahh, what do most people use?
11:38 jiffin KenM: you can try out a sharded volume (similar to stripe)
11:38 jiffin KenM: do you have any specific reason to stripe the files at the backend?
11:38 TvL2386 joined #gluster
11:39 KenM ignorance I guess ;)  I just thought that's what people do when setting up multi-server gluster
11:40 KenM i'll look into other configs, but is there a specific recommendation?  I was trying to stay away from "replicated" at first while I get a feel for how "slow" small-file performance is
11:42 jiffin KenM: IMO people usually try a 1x2 (simple replicated) volume for the consistency and availability of data
11:43 jiffin KenM: anyway it depends on your use case
11:44 KenM that makes sense. In my case we're trying to find an AWS "shared file solution" that is as fast as possible, ignoring HA, but thanks! That helps me get past this immediate show-stopper.
11:48 kshlm KenM, If you don't want HA, just use plain glusterfs volumes.
11:49 kshlm Plain volumes only aggregate the available storage and don't do any replication
11:51 KenM kshlm: so you're saying  'gluster volume create' with only brick arguments (e.g. no stripe, replicate, etc)?
11:51 kshlm Yup.
11:51 kshlm That creates a plain distribute volume.
11:52 KenM kshlm: Doh!  I'm such a goof!  I made it more complicated than it is...thanks!
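
A minimal sketch of the plain distribute setup kshlm describes, with hostnames and brick paths as placeholders:

    gluster volume create testvol server1:/data/brick1 server2:/data/brick1
    gluster volume start testvol
    # on the client
    mount -t glusterfs server1:/testvol /mnt/gluster

With no replica or stripe keyword, files are simply distributed whole across the two bricks.
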
11:54 nehar joined #gluster
11:55 JonathanD joined #gluster
11:57 JonathanD joined #gluster
11:59 karthik___ joined #gluster
12:01 ppai joined #gluster
12:04 EinstCrazy joined #gluster
12:04 KenM kshlm, jiffin: that solved the problem, thanks!
12:04 KenM Have a great day!
12:06 kshlm Weekly Gluster community meeting starting now in #gluster-meeting
12:32 EinstCrazy joined #gluster
12:33 jgjorgji joined #gluster
12:33 level7 joined #gluster
12:36 mbukatov joined #gluster
12:36 ppai joined #gluster
12:40 EinstCrazy joined #gluster
12:44 shubhendu joined #gluster
12:48 nishanth joined #gluster
12:49 unclemarc joined #gluster
12:50 kdhananjay joined #gluster
12:55 mbukatov joined #gluster
12:58 [Enrico] joined #gluster
12:59 hi11111 joined #gluster
12:59 shubhendu joined #gluster
13:01 d0nn1e joined #gluster
13:07 bennyturns joined #gluster
13:08 B21956 joined #gluster
13:08 nishanth joined #gluster
13:08 spalai left #gluster
13:16 amye joined #gluster
13:16 ndevos @later tell post-factum that he gets a PM from glusterbot
13:16 glusterbot ndevos: The operation succeeded.
13:16 post-factum ndevos: i'm always connected via znc :D
13:17 post-factum ndevos: hm, worked
13:17 ndevos post-factum: yeah, I'm more wondering when glusterbot passes the message on
13:17 post-factum ndevos: i guess it's when i send my first message after the tell op
13:17 post-factum @later tell ndevos hello
13:17 glusterbot post-factum: The operation succeeded.
13:20 plarsen joined #gluster
13:24 Gnomethrower joined #gluster
13:27 jiffin1 joined #gluster
13:28 paul98_ joined #gluster
13:35 level7_ joined #gluster
13:41 gem joined #gluster
13:44 B21956 joined #gluster
13:46 jobewan joined #gluster
13:47 shyam joined #gluster
13:50 andy-b joined #gluster
13:51 ramky joined #gluster
13:55 poornimag joined #gluster
13:56 paul98 joined #gluster
13:58 shyam1 joined #gluster
13:59 nbalacha joined #gluster
14:00 Gnomethrower joined #gluster
14:02 aravindavk joined #gluster
14:03 spalai joined #gluster
14:04 haomaiwang joined #gluster
14:05 [diablo] joined #gluster
14:06 [diablo] Good afternoon guys, does anyone know if there is a way to reset the web UI admin access password on RHGS 3.1 please?
14:06 [diablo] seems we've lost it ;-)
14:09 [diablo] oh he's found it
14:09 pg joined #gluster
14:13 itisravi joined #gluster
14:21 skylar joined #gluster
14:22 paul98_ joined #gluster
14:26 Manikandan_ joined #gluster
14:26 skoduri joined #gluster
14:26 karnan joined #gluster
14:27 mpietersen joined #gluster
14:27 paul98 joined #gluster
14:29 mpietersen joined #gluster
14:39 Wizek_ joined #gluster
14:40 plarsen joined #gluster
14:43 kpease joined #gluster
14:43 m0zes joined #gluster
14:45 BitByteNybble110 joined #gluster
14:45 claude_ left #gluster
14:47 dlambrig_ joined #gluster
14:52 itisravi joined #gluster
14:53 nehar joined #gluster
14:58 haomaiwang joined #gluster
15:04 haomaiwang joined #gluster
15:05 kshlm joined #gluster
15:06 hybrid512 joined #gluster
15:08 level7 joined #gluster
15:15 itisravi joined #gluster
15:16 gem joined #gluster
15:18 nishanth joined #gluster
15:24 shubhendu joined #gluster
15:25 dlambrig_ joined #gluster
15:29 m0zes joined #gluster
15:33 amye joined #gluster
15:36 shyam joined #gluster
15:39 Gnomethrower joined #gluster
15:57 level7 joined #gluster
16:25 harish joined #gluster
16:29 spalai joined #gluster
16:29 Siavash joined #gluster
16:29 Siavash joined #gluster
16:32 aravindavk joined #gluster
16:35 bluenemo joined #gluster
16:41 dlambrig_ joined #gluster
16:41 jgjorgji joined #gluster
16:47 Gnomethrower joined #gluster
16:48 mchangir|afk joined #gluster
16:53 haomaiwang joined #gluster
16:56 jlockwood joined #gluster
16:59 jobewan joined #gluster
17:00 jobewan joined #gluster
17:03 Gnomethrower joined #gluster
17:07 Gnomethrower joined #gluster
17:08 spalai joined #gluster
17:13 atinm|afk joined #gluster
17:17 ivan_rossi left #gluster
17:19 spalai left #gluster
17:24 malevolent_ joined #gluster
17:24 xavih_ joined #gluster
17:24 deniszh joined #gluster
17:24 gem joined #gluster
17:41 DV__ joined #gluster
17:57 MikeLupe joined #gluster
17:59 deniszh joined #gluster
18:08 skylar joined #gluster
18:17 mowntan joined #gluster
18:34 natarej joined #gluster
18:35 Siavash joined #gluster
18:38 MikeLupe2 joined #gluster
18:50 robb_nl joined #gluster
18:58 spalai1 joined #gluster
18:58 dlambrig_ joined #gluster
19:03 MikeLupe joined #gluster
19:03 shubhendu joined #gluster
19:05 MikeLupe2 joined #gluster
19:09 MikeLupe joined #gluster
19:13 Slashman joined #gluster
19:14 level7_ joined #gluster
19:17 bennyturns joined #gluster
19:23 bennyturns joined #gluster
19:25 MikeLupe joined #gluster
19:25 kovshenin joined #gluster
19:29 gem joined #gluster
19:30 F2Knight joined #gluster
19:55 johnmilton joined #gluster
20:00 haomaiwang joined #gluster
20:12 johnmilton joined #gluster
20:36 Scotch joined #gluster
20:36 B21956 joined #gluster
20:41 jgjorgji joined #gluster
21:01 hybrid512 joined #gluster
21:29 deniszh joined #gluster
21:40 shyam joined #gluster
21:46 kovshenin joined #gluster
21:56 jockek joined #gluster
22:24 bluenemo joined #gluster
22:51 edong23 joined #gluster
23:12 john51_ joined #gluster
23:14 mbukatov joined #gluster
23:16 Acinonyx joined #gluster
23:16 yoavz joined #gluster
23:19 combatdud3 joined #gluster
23:20 combatdud3 Forgive the noob question. Does anyone have any resources on the total capacity of replicated volumes vs distributed-replicated volumes vs distributed volumes?
23:22 post-factum combatdud3: could you please be more specific?
23:24 combatdud3 Sorry. If I have 4 nodes, each with 1 brick of 500GB, what would the capacities be in each setup? I haven't found anywhere that explains this (in a way I understand).
23:25 combatdud3 Context is that I am evaluating ovirt + gluster for a hypervisor setup.
23:30 dlambrig_ joined #gluster
23:36 haomaiwang joined #gluster
23:37 Wojtek joined #gluster
23:38 necrogami joined #gluster
23:41 post-factum you could make a 2×2 distributed-replicated volume with a total capacity of 500×4/2 = 1TB
23:41 post-factum or a pure distributed volume with a total capacity of 500×4 = 2TB
23:42 post-factum or replica 3 arbiter 1
23:43 post-factum in that case the total capacity is 500GB, and 1 node is not used
23:43 post-factum it is up to you to decide and choose between reliability, capacity and performance
23:43 combatdud3 Cool. So distributed-replicated is probably the best mix of reliability, capacity and performance?
23:44 post-factum yep
23:44 combatdud3 Awesome. Thanks for your help.
23:44 post-factum np
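
A sketch of how those layouts map onto the volume create syntax for the 4×500GB case above (volume names, hostnames and brick paths are placeholders):

    # distributed-replicated 2x2, ~1TB usable
    gluster volume create vol1 replica 2 \
        node1:/b/brick node2:/b/brick node3:/b/brick node4:/b/brick
    # pure distribute, ~2TB usable, no redundancy
    gluster volume create vol2 \
        node1:/b/brick node2:/b/brick node3:/b/brick node4:/b/brick
    # replica 3 arbiter 1, ~500GB usable, node4 left out
    gluster volume create vol3 replica 3 arbiter 1 \
        node1:/b/brick node2:/b/brick node3:/b/brick
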
23:45 MugginsM joined #gluster
23:46 xavih joined #gluster
23:58 MugginsM joined #gluster
