IRC log for #gluster, 2017-05-08


All times are shown in UTC.

Time Nick Message
00:12 JoeJulian taved: yes
00:13 taved Thanks JoeJulian
00:58 ichthys joined #gluster
01:00 ichthys I am looking to deploy oVirt using glusterfs with replicated glusterfs volumes. Is there a "sizing guide" to know how many disks I will need, and also a good way to know when a disk dies so I can replace it?
01:08 shdeng joined #gluster
01:10 gyadav__ joined #gluster
01:17 bhakti joined #gluster
01:32 derjohn_mob joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:49 riyas joined #gluster
01:50 gyadav__ joined #gluster
02:02 atrius joined #gluster
02:31 om2_ joined #gluster
02:43 nirokato joined #gluster
02:46 plarsen joined #gluster
02:54 skoduri joined #gluster
03:17 kramdoss_ joined #gluster
03:32 nbalacha_ joined #gluster
03:36 prasanth joined #gluster
03:37 magrawal joined #gluster
03:47 Shu6h3ndu joined #gluster
03:49 itisravi joined #gluster
03:50 riyas joined #gluster
03:51 taved_ joined #gluster
04:03 atinm joined #gluster
04:25 kotreshhr joined #gluster
04:31 masber joined #gluster
04:39 kdhananjay joined #gluster
04:41 sanoj joined #gluster
04:41 ankitr joined #gluster
04:43 buvanesh_kumar joined #gluster
04:45 kramdoss_ joined #gluster
04:46 ankitr joined #gluster
04:52 Humble joined #gluster
05:02 Karan joined #gluster
05:10 ndarshan joined #gluster
05:12 rafi joined #gluster
05:14 aravindavk joined #gluster
05:17 karthik_us joined #gluster
05:19 skumar joined #gluster
05:21 ppai joined #gluster
05:29 Prasad joined #gluster
05:30 skoduri joined #gluster
05:44 baber joined #gluster
05:53 sona joined #gluster
05:55 Saravanakmr joined #gluster
05:57 itisravi joined #gluster
06:00 prasanth joined #gluster
06:04 msvbhat joined #gluster
06:07 amarts joined #gluster
06:11 apandey joined #gluster
06:14 om2_ joined #gluster
06:19 msvbhat_ joined #gluster
06:19 Peppard joined #gluster
06:19 om2_ joined #gluster
06:19 nbalacha_ joined #gluster
06:19 skumar joined #gluster
06:19 aravindavk joined #gluster
06:19 skoduri joined #gluster
06:20 rafi joined #gluster
06:21 riyas joined #gluster
06:24 gyadav joined #gluster
06:29 primusinterpares joined #gluster
06:31 karthik_us joined #gluster
06:32 Karan joined #gluster
06:34 Peppard joined #gluster
06:40 ankitr joined #gluster
06:45 jiffin joined #gluster
06:51 ivan_rossi joined #gluster
06:52 ivan_rossi left #gluster
06:55 hgowtham joined #gluster
06:57 ashiq joined #gluster
06:59 masber joined #gluster
07:01 itisravi_ joined #gluster
07:03 amarts joined #gluster
07:15 kotreshhr joined #gluster
07:20 amarts joined #gluster
07:26 atinm joined #gluster
07:34 fsimonce joined #gluster
07:46 derjohn_mob joined #gluster
07:52 jkroon joined #gluster
07:57 apandey_ joined #gluster
07:58 apandey_ joined #gluster
08:03 ahino joined #gluster
08:04 kotreshhr joined #gluster
08:04 Peppard joined #gluster
08:34 itisravi joined #gluster
08:38 _KaszpiR_ joined #gluster
08:39 kramdoss_ joined #gluster
08:51 ayaz joined #gluster
08:57 percevalbot joined #gluster
08:58 poornima_ joined #gluster
09:15 chawlanikhil24 joined #gluster
09:31 jkroon joined #gluster
09:40 Wizek__ joined #gluster
09:46 kramdoss_ joined #gluster
10:03 flying joined #gluster
10:06 msvbhat joined #gluster
10:20 msvbhat joined #gluster
10:32 karthik_us joined #gluster
10:35 apandey joined #gluster
10:44 apandey joined #gluster
10:55 karthik_us joined #gluster
11:11 kramdoss_ joined #gluster
11:13 Dogethrower joined #gluster
11:41 saintpablo joined #gluster
11:56 Dogethrower joined #gluster
12:10 btspce joined #gluster
12:13 armyriad joined #gluster
12:15 btspce Seeking advice on disk/brick configuration for a 3-4 node KVM cluster with gluster. Each server has 4x4TB SATA drives. Currently running JBOD with distributed-replicated replica 2, and we need more speed
12:17 btspce switch to mdraid 10, one big brick per server and replica 3 on three servers?
12:18 Asako joined #gluster
12:18 amarts joined #gluster
12:23 d4n13L define "more speed", what is the network connection between these nodes?
12:29 _KaszpiR_ joined #gluster
12:31 kotreshhr left #gluster
12:52 ahino1 joined #gluster
12:52 nbalacha_ joined #gluster
13:04 gnulnx joined #gluster
13:04 gnulnx Wondering if anyone is using inotify to monitor files added to gluster mount points, and any roadblocks or caveats they may have hit?
13:06 devyani7 joined #gluster
13:06 btspce joined #gluster
13:08 btspce at the moment we use bonded 1GbE NICs
13:12 d4n13L what is your current local write speed on JBOD?
13:13 btspce # dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct 1073741824 bytes (1.1 GB) copied, 9.96009 s, 108 MB/s
13:14 btspce on two bonded nics (LACP)
13:15 d4n13L that's local and not on a replicated gluster?
13:16 d4n13L if yes, raid10 would probably give you 2x-3x as much, considering you have a good raid controller
13:16 d4n13L and then your NIC would be the bottleneck
13:17 d4n13L so in order to get better speed, you'd need to upgrade both, NIC and RAID
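A minimal sketch of separating disk throughput from network throughput in this situation: run the same direct-I/O write once against local disk and once against the gluster mount (the mount point below is a hypothetical assumption):

    # local disk, bypassing gluster entirely (same test as above)
    dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct

    # through the gluster FUSE mount: includes replication and network overhead
    # (/mnt/glustervol is a made-up mount point)
    dd if=/dev/zero of=/mnt/glustervol/testfile bs=1G count=1 oflag=direct

    # clean up the test files afterwards
    rm -f /root/testfile /mnt/glustervol/testfile

If the local number is far above the mount number, the network is the bottleneck. Note that with LACP a single TCP stream still traverses only one 1GbE link, so roughly 110-120 MB/s is the per-stream ceiling regardless of bonding, which may explain the 108 MB/s measured above.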
13:17 taved joined #gluster
13:23 baber joined #gluster
13:33 skylar joined #gluster
13:36 gyadav joined #gluster
13:37 alvinstarr joined #gluster
13:37 btspce that was from one of the virtual machines. These servers do not use any hardware raid at the moment, so I'm thinking of going mdraid 10 with sharding and replica 3
13:42 ppai joined #gluster
13:42 shaunm joined #gluster
13:56 btspce what disk/brick configuration do you run for KVM hosts? (examples wanted) sharding? mdraid? NIC config?
13:57 ankitr joined #gluster
14:02 amarts joined #gluster
14:23 plarsen joined #gluster
14:25 ahino joined #gluster
14:28 itisravi joined #gluster
14:45 nbalacha_ joined #gluster
14:51 aravindavk joined #gluster
14:53 zoja joined #gluster
14:59 glisignoli joined #gluster
15:04 wushudoin joined #gluster
15:09 cloph_away joined #gluster
15:15 farhorizon joined #gluster
15:19 bmurt joined #gluster
15:19 Somedream joined #gluster
15:23 kramdoss_ joined #gluster
15:25 billputer joined #gluster
15:26 cholcombe joined #gluster
15:33 [fre] joined #gluster
15:33 [fre] Hi all.
15:33 btspce joined #gluster
15:33 [fre] Could someone tell me where I should ask questions about RH glusterfs installs?
15:34 [fre] I'm building volumes from "scratch", i.e. manually. I'd like to know if they need some kind of "structure": specific mount points, shares, lock directories and so on.
15:36 plarsen joined #gluster
15:43 shutupsquare joined #gluster
15:49 zakharovvi[m] joined #gluster
15:59 skoduri joined #gluster
16:13 budric[m] joined #gluster
16:14 vbellur joined #gluster
16:30 Gambit15 joined #gluster
16:36 nbalacha_ joined #gluster
16:42 riyas joined #gluster
16:46 farhorizon joined #gluster
16:51 ankitr joined #gluster
16:58 gyadav joined #gluster
17:03 msvbhat joined #gluster
17:08 PotatoGim joined #gluster
17:12 amarts joined #gluster
17:14 sona joined #gluster
17:37 nh2 joined #gluster
17:37 nh2 hi, does it make sense that the glusterd systemd unit declares itself as `Before=network-online.target`?
17:37 nh2 https://github.com/gluster/glusterfs/blob/master/extras/systemd/glusterd.service.in#L5
17:37 glusterbot Title: glusterfs/glusterd.service.in at master · gluster/glusterfs · GitHub (at github.com)
17:41 nh2 it is my understanding that the meaning of `network-online.target` is to signal when a network connection is available -- I'm not sure it makes a lot of sense to require glusterd to start before that
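For reference, a sketch of how one could experiment locally with the opposite ordering via a systemd drop-in override, without editing the packaged unit (an empty Before= clears the list inherited from the unit file; whether this ordering is actually correct is exactly the question nh2 is raising):

    # systemctl edit glusterd
    # writes /etc/systemd/system/glusterd.service.d/override.conf:
    [Unit]
    Before=
    After=network-online.target
    Wants=network-online.target

    # then: systemctl daemon-reload && systemctl restart glusterd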
17:45 ahino joined #gluster
17:47 lighttrr joined #gluster
17:56 ashiq joined #gluster
17:58 bmurt joined #gluster
18:00 Karan joined #gluster
18:03 Peppard hi... is it possible to disable sticky-pointers? thanks!
18:03 Peppard (i know what they are for, but they interfere badly with other tools, in my case snapraid, and i think i probably don't need them performance-wise)
18:10 gyadav joined #gluster
18:14 gyadav joined #gluster
18:17 jiffin joined #gluster
18:17 baber joined #gluster
18:23 mlg9000 joined #gluster
18:24 gyadav joined #gluster
18:38 RustyB joined #gluster
18:39 mlg9000 joined #gluster
18:41 jkroon joined #gluster
18:43 gyadav joined #gluster
18:47 msvbhat joined #gluster
18:58 _KaszpiR_ hey, looks like glusterfs 3.8.11, when invoking gluster peer probe <dns>, says the node is already in the pool
18:58 _KaszpiR_ but it returns only the IP of the host, without updating it with the hostname
18:59 _KaszpiR_ hm I think it was not like that before
19:03 icey joined #gluster
19:07 _KaszpiR_ the issue is caused by the fact that the puppet module (voxpopuli/puppet-gluster) tries to add peers via hostnames, but it seems gluster is not reporting back the full IP/peer list via facter
19:10 _KaszpiR_ ah, facter grabs peer status | grep Hostname, totally ignoring 'Other names:'
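Roughly what that looks like: gluster peer status prints the address a peer was first probed with under Hostname and any later-learned names under 'Other names', so a grep for Hostname alone never sees the aliases. A sketch with made-up output (addresses and UUID are placeholders):

    $ gluster peer status
    Number of Peers: 1

    Hostname: 192.0.2.10
    Uuid: <uuid>
    State: Peer in Cluster (Connected)
    Other names:
    node2.example.com

    # what the facter fact effectively does; node2.example.com never appears
    $ gluster peer status | grep Hostname
    Hostname: 192.0.2.10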
19:11 vbellur joined #gluster
19:13 n-st joined #gluster
19:16 guhcampos joined #gluster
19:21 jkroon joined #gluster
19:24 baber joined #gluster
19:31 jkroon joined #gluster
19:36 farhorizon joined #gluster
19:43 shaunm joined #gluster
19:43 derjohn_mob joined #gluster
20:14 baber joined #gluster
20:14 scones joined #gluster
20:27 farhoriz_ joined #gluster
20:49 niknakpaddywak joined #gluster
20:58 shyam joined #gluster
21:01 nh2 joined #gluster
21:02 baber joined #gluster
21:05 mlg9000 joined #gluster
21:07 scones Anyone here have any experience good or bad running Gluster in AWS they could share?
21:36 mlg9000 joined #gluster
22:09 Vapez joined #gluster
22:40 bmurt joined #gluster
22:50 derjohn_mob joined #gluster
23:03 farhorizon joined #gluster
23:13 nh2 joined #gluster
23:15 bmurt joined #gluster
23:23 btspce joined #gluster
23:23 btspce please help. I'm getting Status: Transport endpoint is not connected
23:23 btspce all bricks are up
23:25 JoeJulian btspce: firewall? iptables? selinux?
23:26 JoeJulian _KaszpiR_: gluster got smarter. Re-probing to set the hostname is no longer critical.
23:26 JoeJulian Peppard: yes, don't create a distribute volume. Otherwise, no.
23:27 btspce firewalld...
23:28 JoeJulian btspce: I thought that gluster was _supposed_ to be able to add itself to firewalld, at least the 3.10 version.
23:28 JoeJulian If that's not working, it's a bug.
23:28 btspce this is 3.8
23:28 JoeJulian Ah, not so sure about 3.8.
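For 3.8, where the ports may have to be opened by hand, a sketch (the 'glusterfs' firewalld service definition may not exist on every distro, and the brick port range below assumes the 49152+ defaults used since gluster 3.4; widen it to cover one port per brick on the node):

    # if your firewalld ships a glusterfs service definition:
    firewall-cmd --permanent --add-service=glusterfs

    # otherwise, open the ports explicitly: 24007-24008 for glusterd/management,
    # plus the brick port range starting at 49152
    firewall-cmd --permanent --add-port=24007-24008/tcp
    firewall-cmd --permanent --add-port=49152-49251/tcp

    firewall-cmd --reload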
23:28 JoeJulian left #gluster
23:29 JoeJulian joined #gluster
23:29 * JoeJulian grumbles
23:29 JoeJulian Damn I hate when alt-tab doesn't actually change apps and ctrl-w closes #gluster.
23:30 btspce @Joe Can I ask for advice on brick config ?
23:33 btspce We are running 4 nodes with 4x4TB SATA each in a JBOD config. Gluster is Dist-Repl 8x2=16. 2x1GbE LACP for gluster. These servers are KVM hosts with lots of random IO. What is the best way to maximize performance with this hardware?
23:34 btspce we also want quorum
23:35 btspce mdraid 10, 1 brick on each host, replica 3 with sharding?
23:35 JoeJulian I'm guessing email?
23:35 JoeJulian I do raid 0 if I need throughput. replica 3
23:35 JoeJulian sharding, yes.
23:36 JoeJulian Now I'm thinking vm images.
23:36 btspce yes gluster is only hosting vm images
23:36 JoeJulian imho, raid 0 replica 3 sharded.
23:37 btspce 20GB-1.5TB each image
23:37 btspce and let one disk failure kill one node?
23:39 masber joined #gluster
23:39 JoeJulian Sure, you've got 2 more. Self-heal should be roughly 2 days on a very full volume, and with a normal MTBF of most single servers, that'll still give you 6 nines.
23:41 btspce but with this hardware, you think I should skip distributed bricks and go with replica 3 or 4 (in the case of 4 nodes)?
23:43 nh2 gluster system:: execute georep-sshkey.py node-generate .
23:43 nh2 Unable to end. Error : Success
23:43 nh2 what does this mean?
23:43 nh2 I have no idea what it's trying to tell me
23:43 btspce nice error (or success?) :)
23:44 JoeJulian No, I wouldn't go past replica 3
23:45 nh2 btspce: the exit code is 1, and this is also what `gluster-georep-sshkey generate` fails with
23:45 JoeJulian nh2: It looks like the error means successfully was unable to end
23:45 JoeJulian no f'ing clue. lol.
23:45 nh2 the only thing I could find is https://www.spinics.net/lists/gluster-users/msg30820.html which suggests selinux as a culprit but I don't even have that installed
23:45 glusterbot Title: Re: Gluster geo-replication failure — Gluster Users (at www.spinics.net)
23:46 nh2 JoeJulian: I can't even find the string in the gluster source code that prints this error
23:46 btspce so partition the stripes to get replica 3 on 4 nodes?
23:46 nh2 ➤ git grep "Unable to end" v3.10.1
23:46 nh2 v3.10.1:geo-replication/src/peer_mountbroker.in:    "Unable to end. Error : Bad file descriptor" error,
23:46 nh2 is the only occurrence, and a different error message
23:47 JoeJulian btspce: As you add distribute subvolumes, you actually spread the risk of any one image being destroyed, increasing the statistical stability with respect to any one file.
23:48 JoeJulian The odds of *any* file being lost remain the same, but the probability of any *one* file being destroyed goes down significantly.
23:48 JoeJulian It's weird.
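Put together, the advice in this thread would look something like the sketch below: one raid 0 brick per server, replica 3, sharding enabled for VM images. Hostnames, brick paths, and the shard size are illustrative assumptions, not values from this discussion:

    # one raid 0 brick per server, replica 3 across three servers
    gluster volume create vmstore replica 3 \
        server1:/bricks/raid0/vmstore \
        server2:/bricks/raid0/vmstore \
        server3:/bricks/raid0/vmstore

    # shard large VM images so self-heal copies shards, not whole images
    gluster volume set vmstore features.shard on
    gluster volume set vmstore features.shard-block-size 64MB

    gluster volume start vmstore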
23:51 JoeJulian nh2: https://github.com/gluster/glusterfs/blob/ff5e91a60887d22934fcb5f8a15dd36019d6e09a/xlators/mgmt/glusterd/src/glusterd-geo-rep.c#L5225
23:51 glusterbot Title: glusterfs/glusterd-geo-rep.c at ff5e91a60887d22934fcb5f8a15dd36019d6e09a · gluster/glusterfs · GitHub (at github.com)
23:53 JoeJulian nh2: So I think that means that whatever it called exited with, "Success" but threw a non-zero return code.
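That "Error : Success" pattern is most likely the classic strerror(errno) trap: the helper returned a non-zero status, but nothing set errno, so errno 0 formats as "Success". Since the printed message is useless, a sketch for digging out the real failure (the glusterd log file name varies by version, e.g. glusterd.log or etc-glusterfs-glusterd.vol.log):

    gluster system:: execute georep-sshkey.py node-generate
    echo "exit code: $?"

    # glusterd, not the CLI, usually logs the underlying error
    tail -n 50 /var/log/glusterfs/glusterd.log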
23:55 nh2 JoeJulian: I guess this is where random line length limitations make debugging difficult. Thanks for finding the source
23:55 btspce JoeJulian: thanks!
23:57 JoeJulian nh2: yeah, I've been annoyed with that for years.
