IRC log for #gluster, 2016-06-10

All times shown according to UTC.

Time Nick Message
00:03 F2Knight joined #gluster
00:04 dlambrig joined #gluster
00:21 lezo joined #gluster
00:21 tyler274 joined #gluster
00:23 dlambrig_ joined #gluster
00:29 dlambrig joined #gluster
00:37 PaulCuzner joined #gluster
00:43 ahino joined #gluster
00:43 dlambrig joined #gluster
00:57 dlambrig joined #gluster
01:06 dlambrig joined #gluster
01:17 DV joined #gluster
01:17 kramdoss_ joined #gluster
01:19 dlambrig joined #gluster
01:22 raghug joined #gluster
01:25 kminooie joined #gluster
01:26 kminooie hi everyone
01:27 kminooie I'm having a problem I can't find anything about online. I am trying to mount a gluster filesystem with autofs, but I get this when I try to go to that directory: "Too many levels of symbolic links"
01:28 harish joined #gluster
01:31 kminooie I don't have any links anywhere in the glusterfs or around that mount point. I can mount the glusterfs manually (with mount) and everything works fine. I am also mounting the same glusterfs on other servers with autofs with no problem; it is just on this one server that I get this message. There is nothing in any of the system log files or in the file that I am passing to glusterfs-client through the "log-file" parameter.
01:31 kminooie does anyone have any idea how to diagnose this issue?
01:32 kminooie JoeJulian: help
01:36 hchiramm joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:51 arif-ali joined #gluster
02:03 dlambrig_ joined #gluster
02:12 JoeJulian kminooie: usually that happens when a symbolic link points to itself.
02:17 kminooie hi JoeJulian, thanks for responding. As I said, there are no links anywhere in the filesystem or around the mount point. I can mount manually on the same mount point with no issue; it is just when I use autofs that I get that error
02:17 kminooie link => symlink
02:18 JoeJulian Then I guess autofs is creating a symlink.
02:22 RameshN joined #gluster
02:22 ndevos kminooie: you are probably hitting https://bugzilla.redhat.com/show_bug.cgi?id=1340936 , I noticed that just recently too
02:22 glusterbot Bug 1340936: unspecified, unspecified, ---, ndevos, MODIFIED , Automount fails because /sbin/mount.glusterfs does not accept the -s option
02:23 ndevos kminooie: it is easy to fix, add (and ignore) a -s option in the /sbin/mount.glusterfs script
02:24 kminooie yes exactly that is what I am seeing in the autofs debug output
02:24 kminooie : >> Illegal option -s
02:24 kminooie >> Usage: /sbin/mount.glusterfs <volumeserver>:<volumeid/volumeport> -o<options> <mountpoint>
02:25 ndevos kminooie: this might work: curl https://github.com/gluster/glusterfs/commit/c8da5669a15ed6944cceb9d003789ff333754bff.patch | patch /sbin/mount.glusterfs
02:27 dlambrig joined #gluster
02:27 ndevos kminooie: after applying the patch, you may need to restart automount, it can cache the result/error of the mounting, a restart should clear that up
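The whole fix is small: /sbin/mount.glusterfs is a shell script, and it only needs to accept and discard the -s flag (mount's "sloppy" option) that automount passes. A standalone sketch of that idea; it assumes nothing about the real script's internals, which the upstream commit patches differently:

    #!/bin/sh
    # Illustrative only -- not the upstream commit c8da5669. The point is that
    # the option parser swallows -s instead of bailing out with "Illegal option -s".
    MOUNT_OPTS=""
    while getopts "so:" flag; do
        case "$flag" in
            s) ;;                          # accept and ignore -s (bug 1340936)
            o) MOUNT_OPTS="$OPTARG" ;;     # regular -o mount options
            *) echo "Usage: $0 [-s] [-o opts] server:/volume mountpoint" >&2; exit 1 ;;
        esac
    done
    shift $((OPTIND - 1))
    echo "would mount $1 on $2 with options: $MOUNT_OPTS"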
02:28 kminooie thanks I am trying it.
02:31 ndevos kminooie: what version of glusterfs are you running?
02:36 kminooie ndevos: 3.7.11 ( debian )
02:37 ndevos kminooie: ok, I'll get a backport to 3.7 done then
02:37 kminooie :) :) thanks ndevos it is working ( after applying your patch ) I need to bookmark this :D
02:38 ndevos kminooie: cool, thanks for testing!
02:38 * ndevos thought he was the only one doing automounting with Gluster
02:38 kminooie hell no, it is the best thing ever :D
02:39 JoeJulian Yeah, I've never quite understood why you would automount gluster.
02:39 kminooie thanks JoeJulian sorry I bothered you. you are always my last hope :D
02:40 ndevos with automount you can just browse the volumes, I have one for (document/mail) backups, pictures, yum repositories and installation media
02:41 ndevos automount makes it easier for other services (like nginx) to access a volume too, no need to configure systemd dependencies anywhere
02:41 kminooie I can tell you that in my case it is kind of a leftover habit from before, when the glusterfs client was way too clunky and nfs wouldn't recover by itself. But nevertheless it works like a charm
02:42 kminooie also what ndevos said ( about nginx )
02:42 JoeJulian I suppose if there was an automount plugin that could retrieve the list of volumes from gluster it could have its uses.
02:43 kminooie ndevos: again thanks, u saved my a**
02:43 ndevos kminooie: you want your Tested-by credit in the 3.7 patch? send me your "Real Name <you@example.com>" id :)
02:43 JoeJulian but I've never had a problem with nginx accessing a volume that was mounted with fstab.
02:43 kminooie it is kaveh minooie, but don't worry about :D
02:43 ndevos JoeJulian: how about http://blog.nixpanic.net/2014/04/configuring-autofs-for-glusterfs-35.html ?
02:43 glusterbot Title: Nixpanic's Blog: Configuring autofs for GlusterFS 3.5 (at blog.nixpanic.net)
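For reference, the kind of autofs setup that post describes boils down to a master-map entry plus a map of volumes. Everything below (the /gluster prefix, the host "storage1", the volume names) is a made-up example; the blog post covers the details and other variants:

    # /etc/auto.master
    /gluster  /etc/auto.gluster  --timeout=60

    # /etc/auto.gluster  (one line per volume; "storage1" is a placeholder host)
    pictures  -fstype=glusterfs  storage1:/pictures
    backups   -fstype=glusterfs  storage1:/backups

With that in place, simply cd'ing into /gluster/pictures triggers the mount, which is the browse-the-volumes behaviour ndevos describes above.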
02:44 kminooie thanks every one have a good evening
02:44 kramdoss_ joined #gluster
02:45 ndevos hmm, no email... but I assume this is him: https://github.com/kminooie
02:45 glusterbot Title: kminooie (kaveh minooie) · GitHub (at github.com)
03:25 karnan joined #gluster
03:26 burn_ joined #gluster
03:28 nishanth joined #gluster
03:38 RameshN joined #gluster
03:38 amye joined #gluster
03:42 c0dyhi11 joined #gluster
03:44 c0dyhi11 Red Hat Storage 3 Administration Guide
03:44 c0dyhi11 Hello, I'm looking for a little help. I've googled like crazy and searched the "Red Hat Storage 3 Admin Guide" and I can't find my answer.
03:45 nhayashi joined #gluster
03:45 c0dyhi11 Is there a command to display the replica pairs of each brick?
03:47 c0dyhi11 I have a 7x node (6x HDDs each) replicated/distributed cluster with replicas = 3, and I'd like to know which bricks are "Replica Pairs"
03:55 aspandey joined #gluster
03:55 itisravi joined #gluster
04:00 gem joined #gluster
04:02 atinm joined #gluster
04:03 PaulCuzner joined #gluster
04:04 c0dyhi11_ joined #gluster
04:04 c0dyhi11_ joined #gluster
04:06 c0dyhi11 joined #gluster
04:09 JoeJulian ~brick order | c0dyhi11
04:09 glusterbot c0dyhi11: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
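To make that concrete for the replica-pairs question: gluster volume info reports bricks in that same creation order, so with replica 3 every consecutive group of three bricks is one replica set. A hypothetical 2 x 3 volume, output abridged and annotated:

    $ gluster volume info myvol
    Volume Name: myvol
    Type: Distributed-Replicate
    Number of Bricks: 2 x 3 = 6
    Bricks:
    Brick1: server1:/data/brick1   <- replica set 1
    Brick2: server2:/data/brick1   <- replica set 1
    Brick3: server3:/data/brick1   <- replica set 1
    Brick4: server1:/data/brick2   <- replica set 2
    Brick5: server2:/data/brick2   <- replica set 2
    Brick6: server3:/data/brick2   <- replica set 2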
04:10 Lee1092 joined #gluster
04:11 c0dyhi11 JoeJulian: I got that when I created the volume. But when I added a new node to the volume (and 6x more bricks) and I kick off a rebalance... How do the replicas work?
04:11 shubhendu_ joined #gluster
04:11 kshlm joined #gluster
04:12 c0dyhi11 It looks like as of 3.6 you don't need to do the replace-brick start anymore to relocate bricks
04:13 c0dyhi11 I was trying to follow this: https://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
04:13 glusterbot Title: How to expand GlusterFS replicated clusters by one server (at joejulian.name)
04:13 c0dyhi11 But realized that things must have changed in 3.6
04:13 nbalacha joined #gluster
04:21 nehar joined #gluster
04:28 RameshN joined #gluster
04:33 c0dyhi11 joined #gluster
04:34 prasanth joined #gluster
04:36 c0dyhi11 Or should I still be shuffling around the bricks with (gluster volume replace-brick ...) before expanding the volume?
04:37 amye left #gluster
04:37 DV joined #gluster
04:42 c0dyhi11 And after I figure this out... I need to figure out how to expand a volume that is using erasure coding instead of replicas...
04:42 c0dyhi11 So I've got that going for me.
04:42 c0dyhi11 Which is nice...
04:49 c0dyhi11_ joined #gluster
04:50 c0dyhi11_ joined #gluster
04:51 poornimag joined #gluster
05:01 gowtham joined #gluster
05:16 ndarshan joined #gluster
05:23 Apeksha joined #gluster
05:24 kotreshhr joined #gluster
05:29 gem joined #gluster
05:32 atinm joined #gluster
05:32 Manikandan joined #gluster
05:32 jiffin joined #gluster
05:34 hgowtham joined #gluster
05:41 msvbhat_ joined #gluster
05:45 RameshN joined #gluster
05:45 jiffin1 joined #gluster
05:45 kotreshhr left #gluster
05:48 Saravanakmr joined #gluster
05:48 ppai joined #gluster
05:49 jiffin joined #gluster
05:53 jiffin1 joined #gluster
05:54 overclk joined #gluster
06:00 satya4ever_ joined #gluster
06:11 PaulCuzner joined #gluster
06:11 aravindavk joined #gluster
06:12 ppai joined #gluster
06:15 hackman joined #gluster
06:16 kotreshhr joined #gluster
06:16 rafi joined #gluster
06:20 ashiq joined #gluster
06:20 harish joined #gluster
06:21 atinm joined #gluster
06:22 skoduri joined #gluster
06:24 Manikandan joined #gluster
06:25 jtux joined #gluster
06:26 msvbhat_ joined #gluster
06:28 kdhananjay joined #gluster
06:29 gvandeweyer Hi, we are seeing issues with gluster resulting in dmesg "INFO: task <task> blocked for more than 120 seconds" messages. I've found some bug reports, but the latest one should have been resolved in 3.5. We're running 3.7.9.
06:30 gvandeweyer <task> can be anything (java, cp, perl), and it seems to correlate with high line-by-line writes to the gluster volume, while under some load.
06:31 gvandeweyer I've also found a reference to disabling hugepages, but since these boost performance of the VMs in our case, I'd rather not disable them. Could anybody explain whether this issue with hugepages still exists? I'm on an ubuntu host + kvm guests. The gluster bricks are on bare-metal machines in a distributed/replicated setup
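(Before deciding anything about hugepages, their current state is easy to capture on the host; on Ubuntu and most other distro kernels it lives in sysfs:)

    cat /sys/kernel/mm/transparent_hugepage/enabled   # e.g. "[always] madvise never"
    cat /sys/kernel/mm/transparent_hugepage/defrag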
06:34 jiffin1 joined #gluster
06:34 raghug joined #gluster
06:34 atalur joined #gluster
06:36 jiffin joined #gluster
06:40 natarej joined #gluster
06:42 pur__ joined #gluster
06:45 bb0x joined #gluster
06:45 msvbhat_ joined #gluster
06:57 nishanth joined #gluster
06:58 jtux joined #gluster
07:18 [Enrico] joined #gluster
07:31 Manikandan joined #gluster
07:45 rastar joined #gluster
07:46 hackman joined #gluster
07:49 DV joined #gluster
07:49 aravindavk joined #gluster
07:50 arif-ali joined #gluster
07:52 DV joined #gluster
08:00 deniszh joined #gluster
08:00 arcolife joined #gluster
08:10 nhayashi joined #gluster
08:12 Slashman joined #gluster
08:16 ivan_rossi joined #gluster
08:19 hchiramm joined #gluster
08:22 autostatic JoeJulian: Thanks for the advice. Upgrading to a higher version is not an option at the moment. So for now we restarted the brick.
08:25 anil joined #gluster
08:25 ivan_rossi left #gluster
08:28 harish_ joined #gluster
08:31 aravindavk joined #gluster
08:36 nhayashi joined #gluster
08:40 om joined #gluster
08:48 gem joined #gluster
08:50 om2 joined #gluster
08:55 nhayashi joined #gluster
08:56 kenansul- joined #gluster
09:02 kovshenin joined #gluster
09:07 ramky joined #gluster
09:09 nhayashi joined #gluster
09:12 rafi1 joined #gluster
09:14 wnlx joined #gluster
09:16 jri joined #gluster
09:16 jiffin joined #gluster
09:17 RameshN joined #gluster
09:17 rafi joined #gluster
09:19 itisravi joined #gluster
09:19 jiffin1 joined #gluster
09:22 ppai joined #gluster
09:30 muneerse joined #gluster
09:34 JesperA joined #gluster
09:38 lalatenduM joined #gluster
09:38 jiffin1 joined #gluster
09:40 lalatenduM joined #gluster
09:44 muneerse2 joined #gluster
09:46 jiffin1 joined #gluster
09:53 jiffin1 joined #gluster
09:56 kenansulayman joined #gluster
10:02 Chinorro_ joined #gluster
10:06 deniszh joined #gluster
10:13 arif-ali joined #gluster
10:16 RameshN joined #gluster
10:20 msvbhat_ joined #gluster
10:25 cliluw joined #gluster
10:30 hchiramm joined #gluster
10:46 atinm joined #gluster
10:53 skoduri joined #gluster
10:57 bb0x joined #gluster
10:58 bfoster joined #gluster
11:01 bb0x joined #gluster
11:03 raghug joined #gluster
11:13 skoduri_ joined #gluster
11:14 jiffin1 joined #gluster
11:16 johnmilton joined #gluster
11:25 hchiramm joined #gluster
11:25 ju5t joined #gluster
11:27 kenansulayman joined #gluster
11:28 jiffin1 joined #gluster
11:34 atinm joined #gluster
11:44 msvbhat_ joined #gluster
11:47 hchiramm joined #gluster
11:49 kenansul- joined #gluster
11:49 arif-ali joined #gluster
11:50 ira joined #gluster
11:51 dlambrig joined #gluster
11:51 robb_nl joined #gluster
11:53 bb0x joined #gluster
11:54 msvbhat_ joined #gluster
12:00 ju5t joined #gluster
12:02 chirino joined #gluster
12:12 kenansul| joined #gluster
12:22 msvbhat_ joined #gluster
12:28 ju5t joined #gluster
12:28 dlambrig joined #gluster
12:31 plarsen joined #gluster
12:36 kenansulayman joined #gluster
12:39 unclemarc joined #gluster
12:45 bb0x joined #gluster
13:02 skoduri joined #gluster
13:07 B21956 joined #gluster
13:11 ben453 joined #gluster
13:12 ben453 bb0x: I had the "Peer is not in cluster yet" issue as well and restarting the daemon fixed it for me
13:12 jiffin1 joined #gluster
13:12 nbalacha joined #gluster
13:13 bb0x ben453, the strange thing is that if I check on the hosts it says connected
13:14 bb0x I was trying to modify the ansible playbook to probe the nodes first and then create the volume with the module
13:17 robb_nl joined #gluster
13:18 ben453 I've found that it works if you probe them all and then wait until they're all in the "Peer in cluster" state. If they don't get into that state after some time, restart the gluster daemon and check the state again
13:18 kenansul- joined #gluster
13:19 dnunez joined #gluster
13:21 bb0x after a few seconds they were in the cluster; probably the ansible module should have a timeout and wait a few seconds
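A sketch of the probe-then-wait approach ben453 describes, e.g. as a step run before volume creation in a playbook; the host names and the 60-second budget are arbitrary:

    #!/bin/sh
    # Probe every peer, then poll until all of them report "Peer in Cluster".
    PEERS="node2 node3 node4"
    for p in $PEERS; do
        gluster peer probe "$p"
    done
    for try in $(seq 1 30); do
        pending=$(gluster peer status | grep '^State:' | grep -vc 'Peer in Cluster')
        [ "$pending" -eq 0 ] && { echo "all peers joined the cluster"; exit 0; }
        sleep 2
    done
    echo "peers still not settled; consider restarting glusterd and re-checking" >&2
    exit 1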
13:23 jiffin1 joined #gluster
13:33 nbalacha joined #gluster
13:37 rwheeler joined #gluster
13:39 dnunez joined #gluster
13:43 PaulCuzner joined #gluster
13:44 msvbhat_ joined #gluster
13:45 ju5t joined #gluster
13:47 RicardoSSP joined #gluster
13:47 RicardoSSP joined #gluster
13:53 arif-ali joined #gluster
14:00 bb0x joined #gluster
14:02 ju5t joined #gluster
14:04 Apeksha joined #gluster
14:06 jiffin1 joined #gluster
14:12 dgandhi joined #gluster
14:14 ju5t joined #gluster
14:19 PaulCuzner joined #gluster
14:26 ju5t joined #gluster
14:27 Che-Anarch joined #gluster
14:31 archit_ joined #gluster
14:33 dnunez joined #gluster
14:37 kenansulayman joined #gluster
14:43 kpease joined #gluster
14:43 bb0x joined #gluster
14:47 kotreshhr left #gluster
14:50 c0dyhi11 joined #gluster
14:54 dnunez joined #gluster
14:54 ju5t joined #gluster
14:59 rafi1 joined #gluster
15:05 harish_ joined #gluster
15:08 wushudoin joined #gluster
15:12 ashiq joined #gluster
15:14 jbrooks joined #gluster
15:15 bb0x joined #gluster
15:28 crashmag joined #gluster
15:30 overclk joined #gluster
15:34 kenansul- joined #gluster
15:35 muneerse joined #gluster
15:38 skoduri joined #gluster
15:40 muneerse2 joined #gluster
15:55 Manikandan joined #gluster
15:55 Gambit15 joined #gluster
15:57 ashiq joined #gluster
15:57 jiffin joined #gluster
15:58 Gambit15 Hey all
15:59 Gambit15 I'm currently planning on building a KVM cluster with the VHDs stored on a large gluster volume distributed across all of the nodes. In each node, 1 disk for the hypervisor & 5 for gluster
16:00 Gambit15 I've got a total of 32 machines available, however that's actually far more than we immediately need, so I'd start with around 8.
16:01 Gambit15 Anyone got any tips, warnings or comments on things I need to keep an eye out for or consider?
16:02 Gambit15 Doing all of my research online, of course, however up-to-date comments from people with experience would be very welcome
16:06 arif-ali joined #gluster
16:07 jiffin1 joined #gluster
16:09 jiffin joined #gluster
16:13 kenansulayman joined #gluster
16:14 David_H_Smith joined #gluster
16:18 PaulCuzner joined #gluster
16:19 ashiq joined #gluster
16:20 atinm joined #gluster
16:22 PaulCuzner joined #gluster
16:23 PaulCuzner joined #gluster
16:23 kpease joined #gluster
16:25 Gambit15 joined #gluster
16:28 Gambit15 Nobody around?
16:29 jiffin1 joined #gluster
16:31 jiffin joined #gluster
16:34 JoeJulian Gambit15: Doing our jobs probably...
16:35 JoeJulian Gambit15: A couple of thoughts...
16:35 ansyeblya joined #gluster
16:35 * Gambit15 listening
16:35 post-factum JoeJulian: the only job on Friday is to grab some beer and be ready for Euro — 2016 start
16:36 shaunm joined #gluster
16:37 Manikandan joined #gluster
16:37 ansyeblya hi, im getting this:
16:37 ansyeblya http://pastebin.com/Edx611Lq
16:37 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
16:37 JoeJulian To get the performance, I like to raid0 enough disks to maximize throughput and iops. We have 30 disks per server, so we end up with 6 bricks of 4 + ssd journals.
16:37 jiffin1 joined #gluster
16:38 hackman joined #gluster
16:38 ansyeblya https://paste.fedoraproject.org/377151/76688146/
16:38 glusterbot Title: #377151 Fedora Project Pastebin (at paste.fedoraproject.org)
16:39 JoeJulian Avoid making your bricks too large. Originally, this company had 15 x 4TB raid 6 bricks which took over a week to self-heal. That's way too long to have degraded fault tolerance.
16:40 JoeJulian ansyeblya: Check the client log in /var/log/glusterfs/mnt-khd.log
16:41 ansyeblya https://paste.fedoraproject.org/377153/65576908/
16:41 glusterbot Title: #377153 Fedora Project Pastebin (at paste.fedoraproject.org)
16:41 jiffin joined #gluster
16:42 JoeJulian So what's that tell you?
16:42 pedahzur joined #gluster
16:42 Gnomethrower joined #gluster
16:43 jiffin1 joined #gluster
16:43 Gambit15 Hmm, so my original idea of creating one big volume with node replication akin to RAID 6 might not work so well then?
16:43 ansyeblya https://paste.fedoraproject.org/377154/55769881/
16:43 glusterbot Title: #377154 Fedora Project Pastebin (at paste.fedoraproject.org)
16:43 ansyeblya glusterfsd is on that weird 49k port
16:43 ansyeblya and the client has tcp connectivity with it
16:43 JoeJulian Gambit15: I like to do replica 3 and count on Gluster for fault tolerance.
16:43 Gambit15 Just a couple more details on the infra, the servers are all Dell R710s with 6 HDs & 4x1Gbit NICs.
16:43 JoeJulian glusterfsd should be.
16:43 post-factum ansyeblya: nope
16:43 JoeJulian @ports
16:43 glusterbot JoeJulian: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up. All ports must be reachable by both servers and clients. Additionally it will listen on 38465-38468/tcp for NFS. NFS also depends on rpcbind/portmap ports 111 and 2049.
16:44 ansyeblya nope what, never used it. can you tell me what to do?
16:44 post-factum ansyeblya: ^^ tcp/24007
16:44 ansyeblya also accessible
16:44 ansyeblya from client
16:44 ansyeblya they are actually on the same bridge, so whichever port you may call, it's gonna be accessible
16:44 post-factum ansyeblya: as you can see, it is not
16:45 Lee1092 joined #gluster
16:45 post-factum netstat -tunlp | grep glusterd, please
16:45 JoeJulian ansyeblya: "connection attempt failed (Connection refused)" means that the server, gluster1, refused the tcp connection to port 24007.
16:45 post-factum (netstat from server)
16:46 JoeJulian Odds are that you've either got iptables in the way, or glusterd isn't running.
16:46 Gambit15 JoeJulian, I was also considering skipping RAID on the HDs for each node and relying on Gluster's replication, thereby avoiding having two levels of RAID. Although I wonder if that might cause a performance hit?
16:46 ansyeblya ok, but tcp from client to server on that port is successful
16:46 ansyeblya no firewall involved
16:47 ansyeblya https://paste.fedoraproject.org/377155/57721814/
16:47 glusterbot Title: #377155 Fedora Project Pastebin (at paste.fedoraproject.org)
16:47 post-factum Gambit15: each replica increases traffic and introduces additional latency
16:47 JoeJulian Gambit15: It's a balancing act, to be sure.
16:47 ansyeblya https://paste.fedoraproject.org/377156/14655772/
16:47 glusterbot Title: #377156 Fedora Project Pastebin (at paste.fedoraproject.org)
16:47 JoeJulian post-factum: s/increases traffic/increases write traffic and a little management traffic/
16:48 post-factum JoeJulian: doesn't it increase reading traffic as well?
16:48 post-factum JoeJulian: AFAIK, reading is performe from all replicas, using the fastest one
16:48 post-factum *performed
16:48 JoeJulian No, it reads from one replica.
16:48 post-factum JoeJulian: hmmmm
16:48 ansyeblya the server is on 3.5.2 (deb 7.8); the client is on wheezy unfortunately, and only 3.0.5 was available from the repos
16:48 post-factum JoeJulian: how is it selected?
16:49 ansyeblya maybe that ?
16:49 Gambit15 JoeJulian, so if I understand you correctly, you stripe all of the disks on each node & then replicate the nodes?
16:49 ansyeblya but why such a weird debug in log then
16:49 jiffin1 joined #gluster
16:49 ansyeblya client = squeeze (mistyped)
16:49 JoeJulian post-factum: Check out cluster.read-hash-mode
16:49 post-factum JoeJulian: thanks, will do
16:50 Gambit15 I was hoping to be able to have volumes larger than each individual node, but replicated enough to avoid any single point of failure
16:50 JoeJulian Gambit15: Well I have a lot more disks per server (no disks on the client nodes, so they don't participate).
16:51 JoeJulian The volume will still be larger than any individual server's storage.
16:51 Gambit15 The end goal is a decentralized KVM cluster with the VHDs stored on Gluster. Each node is therefore both client & server
16:51 JoeJulian The single-file-max-size (without striping within gluster) will be your max brick size.
16:52 JoeJulian Gambit15: Yeah, I'm aware of "hyperconvergence" despite my annoyance with the term.
16:53 msvbhat_ joined #gluster
16:53 post-factum ansyeblya: what are client and server gluster versions?
16:54 JoeJulian If it was me, yeah, I'd probably do a 5 disk raid0, 1 SSD for the OS + the brick journal. That should be nice and fast. 10Gbit ethernet or infiniband to connect them up and I'd do replica 3 and add capacity 3 "nodes" at a time.
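In command form, that layout (one raid0-backed brick per server, replica 3, grown three servers at a time) looks roughly like the following; the host names and brick paths are hypothetical:

    # initial volume across three servers, one brick each
    gluster volume create vmstore replica 3 \
        n1:/bricks/b1/brick n2:/bricks/b1/brick n3:/bricks/b1/brick
    gluster volume start vmstore

    # later, add capacity three servers at a time (one new replica set)
    gluster peer probe n4 && gluster peer probe n5 && gluster peer probe n6
    gluster volume add-brick vmstore \
        n4:/bricks/b1/brick n5:/bricks/b1/brick n6:/bricks/b1/brick
    gluster volume rebalance vmstore start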
16:54 Gambit15 brick = each raw volume provided to gluster? So in the case of each node running a large stripe, 1 node = 1 brick?
16:55 JoeJulian @glossary
16:55 glusterbot JoeJulian: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
16:55 Gambit15 Yeah, just confirming
16:55 Gambit15 cheers
16:55 plarsen joined #gluster
16:56 post-factum Gambit15: running 1 large brick per node is not that flexible
16:56 post-factum Gambit15: imagine, you'd like to resize it, add/remove disks
16:56 * Gambit15 listening
16:56 post-factum Gambit15: how many nodes have you got?
16:57 Gambit15 I've got about 30 servers to play with, however current capacity only requires about 6, TBH. I was thinking of starting with 8 & then growing as needed
16:57 JoeJulian Replica 3... no problem. Remove the drives, create the new raid, create the filesystem, create the brick path, then start...force the volume and let self-heal handle the rest.
16:58 post-factum JoeJulian: …and the whole node is being healed…
16:58 JoeJulian yep
16:58 jiffin1 joined #gluster
16:58 Gambit15 I won't be adding or removing disks from the servers. The idea is to just fill each hypervisor with disks (6) & then create a large volume across the nodes.
16:58 JoeJulian 5x8 is still just 40TB. That'll heal in a day.
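Spelled out, JoeJulian's swap-and-heal procedure is roughly the following on the affected server; the device, filesystem options, volume and path names are placeholders and the raid tooling will differ:

    # after the replacement raid0 array (here /dev/md0) has been built
    mkfs.xfs -i size=512 /dev/md0          # xfs with 512-byte inodes, the usual gluster recommendation
    mount /dev/md0 /bricks/b1
    mkdir -p /bricks/b1/brick              # recreate the (now empty) brick directory
    gluster volume start vmstore force     # respawn the brick process on the empty brick
    gluster volume heal vmstore full       # self-heal repopulates it from the other replicas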
16:59 bb0x joined #gluster
16:59 * post-factum is suggesting raid1 per brick and circular replica 2
17:00 Gambit15 We've got a ton of 2TB drives lying around which I'll use, and then I'll replace them with 6TB disks if the need ever arises
17:00 Gambit15 post-factum "circular replica 2"?
17:00 JoeJulian Which is fine, but that's 133% of the cost per TB.
17:01 JoeJulian And you don't get any additional reliability out of it.
17:01 balacafalata joined #gluster
17:01 balacafalata-bil joined #gluster
17:02 JoeJulian Do the math. You actually get a tiny fraction less reliability.
17:02 Gambit15 Out of larger disks? I'm aware
17:02 Gambit15 More nodes = more redundancy
17:02 balacafalata joined #gluster
17:02 shubhendu_ joined #gluster
17:02 JoeJulian Out of raid1 x replica 2 vs raid0 x replica 3.
17:03 Gambit15 Ah, sure
17:03 post-factum but less write penalty
17:03 pedahzur Question about the "exactness" of replicas when they are mirroring each other. I went into /export/<mybrick> on each of my two replicas and did a 'du -sc *'. I noticed that the byte counts for one directory weren't the same (2418 bytes less out of 136GB). Is that nothing unusual, or do I have a serious problem somewhere?
17:04 pedahzur Oh, wait...maybe not bytes. I think that's K.
17:04 JoeJulian But that write penalty mostly shows up in network bandwidth.
17:04 Gambit15 Next Q, any suggestions on getting the most efficiency out of small reads & writes?
17:05 JoeJulian @php
17:05 glusterbot JoeJulian: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH --fopen-keep-cache
17:05 jiffin1 joined #gluster
17:05 Gambit15 Most of the VMs are simple webservers with small DBs
17:05 prasanth joined #gluster
17:05 post-factum Gambit15: generally, hosting DBs on network FS is troublesome
17:05 JoeJulian Well they'll be VMs reading from images so the self-check penalty won't really apply.
17:05 Gambit15 I'll be using lots of RAM caches, memcache, etc, of course
17:06 JoeJulian Actually, I host mysql innodb tables on gluster and have for a decade with no issue.
17:06 post-factum JoeJulian: /me tries to imagine that for our setup
17:06 post-factum probably, that is the matter of load
17:06 JoeJulian In fact, sharding innodb by naming files such that they reside on individual dht subvolumes actually made it faster.
17:07 JoeJulian And don't do file-per-table.
17:07 jiffin joined #gluster
17:07 post-factum why?
17:07 Gambit15 Most of these sites are relatively low traffic - a couple of thousand visits per day. Anything heavier than that will have a dedicated DB server on the main storage
17:07 JoeJulian Because then the majority of your reads will come from a single brick.
17:08 guhcampos joined #gluster
17:08 post-factum aye, i've read about read hash :)
17:08 post-factum JoeJulian: but won't sharding spread that load across bricks?
17:08 JoeJulian Oh, that kind of sharding. Yeah, probably.
17:09 JoeJulian I sharded by having multiple innodb files on individual dht bricks before gluster had sharding.
17:09 JoeJulian I haven't tested performance using gluster sharding.
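(For anyone who does want to try it: sharding, introduced around 3.7 and aimed mainly at large VM images, is a per-volume option. The option names below are the standard ones; the block size is just an example and "vmstore" is a placeholder volume:)

    gluster volume set vmstore features.shard on
    gluster volume set vmstore features.shard-block-size 64MB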
17:10 Gambit15 post-factum, you mentioned "circular replica 2" earlier. What did you mean by that?
17:10 arif-ali joined #gluster
17:10 JoeJulian https://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
17:10 glusterbot Title: How to expand GlusterFS replicated clusters by one server (at joejulian.name)
17:10 post-factum Gambit15: let's say you have a distributed-replicated volume where each brick pair is replicated across different servers
17:10 JoeJulian I think he's referring to the bottom diagram.
17:11 post-factum yep
17:11 Gambit15 Up until now, I've only been reading up on docs & blogs. Will start configuring & testing things in real life later today
17:12 JoeJulian real life should wait 'till Monday.
17:12 kenansulayman joined #gluster
17:13 post-factum twss
17:14 Gambit15 post-factum, that's how I expected the normal replication to work
17:14 gem joined #gluster
17:15 post-factum usually, replication works according to how you have configured it
17:15 JoeJulian @brick order
17:15 glusterbot JoeJulian: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
17:16 Gambit15 It was the "circular" bit that passed me by
17:17 post-factum it is just the matter of ordering
17:17 Gambit15 Ah, hold on, looking at the daigram on the blog JoeJulian posted
17:17 Gambit15 Understood
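For three servers with two bricks each, the "circular" (chained) replica 2 layout from that diagram comes straight out of the brick ordering; the names here are illustrative:

    gluster volume create myvol replica 2 \
        server1:/data/brick1 server2:/data/brick1 \
        server2:/data/brick2 server3:/data/brick2 \
        server3:/data/brick3 server1:/data/brick3

Each consecutive pair is a replica set, so every server shares one pair with each neighbour around the ring, which is what lets a replica 2 volume grow one server at a time.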
17:20 Gambit15 How is caching handled? Is it?
17:20 post-factum https://www.gluster.org/pipermail/gluster-devel/2015-September/046611.html
17:20 glusterbot Title: [Gluster-devel] GlusterFS cache architecture (at www.gluster.org)
17:20 Gambit15 (I am looking around for reading material as well, just asking real people, too)
17:21 jiffin1 joined #gluster
17:22 msvbhat_ joined #gluster
17:25 jiffin joined #gluster
17:30 JoeJulian Sorry, I'm going to be afk the rest of the day. Conference calls most of it.
17:31 jiffin1 joined #gluster
17:33 kenansul- joined #gluster
17:35 jiffin1 joined #gluster
17:35 Gambit15 Yeah, think I have all I need for now - now to get my hands dirty! Cheers post-factum & JoeJulian
17:37 post-factum np
17:37 bb0x joined #gluster
17:39 telmich joined #gluster
17:39 telmich hello
17:39 glusterbot telmich: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
17:39 telmich is there a good reason for glusterfsd to take 32 GiB of ram?
17:40 post-factum telmich: no
17:40 telmich (out of 32 GiB available on the system)
17:40 telmich Sorry, it was "glusterfs", not "glusterfsd"
17:40 telmich root      4720  2.2  4.4 2726852 1454748 ?     Ssl   2015 6700:10 /usr/sbin/glusterfs --volfile-server=192.168.0.2 --volfile-server=127.0.0.1 --volfile-id=/ri-gluster-volume-1 /mnt/gluster
17:41 telmich ah, no
17:41 telmich (sorry, it was a lot of sun today)
17:41 telmich root      4652  2.1  1.9 34171092 651680 ?     Ssl   2015 6576:23 /usr/sbin/glusterfsd -s 192.168.0.1 --volfile-id ri-gluster-volume-1.192.168.0.1.home-gluster -p /var/lib/glusterd/vols/ri-gluster-volume-1/run/192.168.0.1-home-gluster.pid -S /var/run/7403b17025eefe0a07709b1bd500feed.socket --brick-name /home/gluster -l /var/log/glusterfs/bricks/home-gluster.log --xlator-option *-posix.glusterd-uuid=5a1c8c7d-d41a-4f33-b35c-e57343e7f43a --brick-port 49152 -
17:41 nishanth joined #gluster
17:42 jiffin1 joined #gluster
17:48 post-factum yeah, too much VIRT
17:49 telmich I've had to kill it, as it was stopping everything else on the system since basically no memory was free - do you have any recommendations on what to do when it happens again?
17:50 Intensity joined #gluster
17:53 jiffin1 joined #gluster
17:54 daMaestro joined #gluster
17:55 jiffin joined #gluster
17:57 kenansul- joined #gluster
17:57 nage joined #gluster
17:59 jiffin1 joined #gluster
18:08 post-factum take a statedump first
18:09 post-factum kill -s USR1 <pid_of_glusterfsd>
18:09 post-factum then go to /var/run/gluster, upload statedump somewhere and send a letter to gluster-users ML
18:09 post-factum I have a similar issue, and hope it is being investigated now
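As commands, the statedump procedure post-factum describes is just the following; the pgrep pattern is illustrative and the dump file names vary by process, but they all contain ".dump.":

    pid=$(pgrep -f glusterfsd | head -n1)   # pick the right brick process if there are several
    kill -s USR1 "$pid"
    sleep 1
    ls -lt /var/run/gluster/ | head         # the newest *.dump.* file is the statedump just written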
18:14 ninkotech__ joined #gluster
18:19 AppStore joined #gluster
18:21 c0dyhi11 joined #gluster
18:22 msvbhat_ joined #gluster
18:38 c0dyhi11 joined #gluster
18:40 Gambit15 post-factum, with all of the other overheads considered, do you see much difference in performance between 7.2k, 10k & 15k drives?
18:41 c0dyhi11 joined #gluster
18:50 bb0x joined #gluster
18:52 PaulCuzner joined #gluster
18:55 c0dyhi11 joined #gluster
18:58 c0dyhi11 joined #gluster
18:58 post-factum Gambit15: unfortunately, never used >7.2k drives much
19:00 Gambit15 So safe to assume that 7.2ks won't make much of a hit then
19:00 Gambit15 Cool
19:03 jiffin joined #gluster
19:05 guhcampos joined #gluster
19:05 c0dyhi11 joined #gluster
19:06 jiffin1 joined #gluster
19:12 jiffin joined #gluster
19:15 jiffin1 joined #gluster
19:16 msvbhat_ joined #gluster
19:18 jiffin joined #gluster
19:20 jiffin1 joined #gluster
19:22 arif-ali joined #gluster
19:24 jiffin joined #gluster
19:26 kotreshhr joined #gluster
19:26 kotreshhr left #gluster
19:26 jiffin1 joined #gluster
19:31 msvbhat_ joined #gluster
19:31 guhcampos joined #gluster
19:32 guhcampos joined #gluster
19:36 kenansulayman joined #gluster
19:39 bb0x joined #gluster
19:42 jiffin1 joined #gluster
19:44 ben453 joined #gluster
19:44 jiffin joined #gluster
19:51 bb0x joined #gluster
19:52 ashiq joined #gluster
19:52 PaulCuzner joined #gluster
19:59 jiffin1 joined #gluster
20:06 julim joined #gluster
20:07 arif-ali joined #gluster
20:09 arif-ali joined #gluster
20:23 shaunm joined #gluster
20:24 jiffin1 joined #gluster
20:29 johnmilton joined #gluster
20:32 guhcampos joined #gluster
20:40 kenansulayman joined #gluster
20:49 johnmilton joined #gluster
20:59 foster joined #gluster
21:02 kenansul- joined #gluster
21:24 kenansul- joined #gluster
21:59 c0dyhi11 joined #gluster
22:37 cliluw joined #gluster
23:03 kenansulayman joined #gluster
23:14 kenansul- joined #gluster
