IRC log for #gluster, 2015-06-10

All times shown according to UTC.

Time Nick Message
00:23 Peppard joined #gluster
00:26 badone__ joined #gluster
00:30 badone_ joined #gluster
00:33 adzmely joined #gluster
00:37 n-st joined #gluster
00:44 jdhiser joined #gluster
00:45 cornus_ammonis joined #gluster
00:49 jdhiser Guys, I'm having some problems with getting a previously started volume (that was cleanly stopped) to restart.  the CLI gives little info other than that the operation fails. Here's some  logs:  http://pastebin.com/zYwS8sxz
00:49 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
00:50 jdhiser there is no /var/log/etc-gluster*log file created.  here's the fpaste link instead, if you prefer.  http://fpaste.org/230571/14338973/
00:50 jdhiser (if that message seems out of nowhere, the bot may have clobbered my first message.  let me know, I'll repeat it.)
00:51 jdhiser FYI, i just did a glusterd restart on each peer.
00:52 jdhiser glusterfs 3.4.2
00:52 premera joined #gluster
00:55 gildub joined #gluster
00:56 jdhiser Here is the glusterd -debug output: http://fpaste.org/230572/33897795/
00:59 tessier JoeJulian: I checked that tcp 24007 connects both ways, selinux is not enforcing.
00:59 tessier jdhiser: Wish I could help. I'm a gluster newbie myself.
01:00 jdhiser Thanks, tessier.
01:00 jdhiser i've been using it w/o problems for a few months.. my first issue :)
01:03 tessier What do you use it for?
01:04 jdhiser we've been standing up a small cluster (40 nodes) to do research at a university for big data based automatic program repair
01:04 jdhiser LinkedIn was nice enough to donate an entire rack with blades that have 12 dual-core CPUs, 48gb ram, 2.1tb disk
01:06 tessier wow
01:06 jdhiser OH!  I figured it out!
01:06 jdhiser They were doing a photoshoot in the room and decided that since gluster was safe, they'd go ahead and hot swap some drives *annoyed*
01:06 jdhiser and the machine decided to remount the drive to a different letter.. so the mounting of one of the bricks failed.
01:07 tessier congrats!
01:07 jdhiser It all works pretty great... except the error reporting, well, leaves something to be desired.
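[Editor's note: the failure above came from a hot-swapped drive reappearing under a different device name, which broke the brick mount. A minimal sketch of the usual precaution, mounting brick filesystems by UUID rather than device letter; the device and brick path are hypothetical.]

    # find the filesystem UUID of the brick device
    blkid /dev/sdb1
    # refer to it by UUID in /etc/fstab so a device-letter change cannot move it:
    # UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /bricks/brick1  xfs  defaults  0 0
    # verify the brick path is actually mounted before starting the volume
    findmnt /bricks/brick1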
01:09 tessier I'm a little concerned about performance. I'm planning to use it to host VM images for KVM.
01:09 jbautista- joined #gluster
01:09 jdhiser we are using openstack for that, and it pushes images around quite nicely.
01:09 jdhiser we did have some performance issues at first, but it was about reading lots of small files.
01:10 jdhiser big files actually were OK.  I understand the gluster native client is better for that.
01:10 tessier I'm getting 28MB/s throughput on a straight streaming write. That seems way slow.
01:10 jdhiser on a 100mbit connection?  or a gb connection?
01:10 tessier gigabit
01:10 jdhiser Yes, that's slow.
01:10 tessier Just writing to one SATA disk but still, that should do 70MB/s
01:10 jdhiser agreed.
01:12 tessier I read on JoeJulian's blog that gluster is already pretty well optimized for the generic use case. I'm using 9000 MTU. Not sure what else I should do.
01:12 jdhiser is that just one (virtual) machine writing on an otherwise quiet switch?
01:13 jdhiser are you using the native client or NFS?
01:14 jdhiser I had to turn on some performance tuning options to get better performance with smaller files.  I'm not sure if any would help with larger files, but here's my set of performance optimization options if you care to investigate more: http://fpaste.org/230575/89884714/
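[Editor's note: jdhiser's exact option set behind the fpaste link is not reproduced here. As an illustration only, these are commonly tuned volume options for small-file workloads; the volume name "myvol" and the values are assumptions, not his configuration.]

    gluster volume set myvol performance.cache-size 256MB
    gluster volume set myvol performance.io-thread-count 32
    gluster volume set myvol performance.write-behind-window-size 4MB
    gluster volume set myvol performance.quick-read on
    gluster volume set myvol performance.stat-prefetch on
    gluster volume info myvol    # confirm which options are now set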
01:15 nangthang joined #gluster
01:17 tessier jdhiser: I'm really not sure. I heard gluster uses nfs but I don't see any nfs processes running nor have I configured anything related to nfs.
01:17 tessier I mount the volume with: mount -t glusterfs 10.0.1.12:/disk07a /gluster/disk07a/
01:17 jdhiser that'd be the native client.
01:17 tessier I notice a lot of cpu time being used also. I think it is shoveling a lot of stuff through userspace.
01:17 tessier native client is faster, I presume?
01:17 jdhiser your mount command would have -t nfs if you were nfs mounting.
01:17 tessier ah, ok
01:18 jdhiser I've heard the native client is better with large files, but for my workload (lots of small files) performance was horrible.
01:18 jdhiser you might try an NFS mount just for shiggles.
01:18 jdhiser i found that fscache and cachefiles were important client side caching optimizations.
01:19 jdhiser i think you can just try it quickly by changing glusterfs to nfs in your mount command.
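[Editor's note: a sketch of the two mount styles being compared. Gluster's built-in NFS server speaks NFSv3 over TCP; the mountpoint /mnt/nfs-test and the fsc detail are illustrative assumptions (fsc only helps if cachefilesd is installed and running).]

    # native FUSE client (what tessier is already using)
    mount -t glusterfs 10.0.1.12:/disk07a /gluster/disk07a
    # gluster's built-in NFSv3 server, for comparison
    mount -t nfs -o vers=3,mountproto=tcp 10.0.1.12:/disk07a /mnt/nfs-test
    # optionally add client-side caching via fscache/cachefiles:
    # mount -t nfs -o vers=3,mountproto=tcp,fsc 10.0.1.12:/disk07a /mnt/nfs-test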
01:24 cholcombe joined #gluster
01:26 jbautista- joined #gluster
01:31 jdhiser Good luck tessier, I'm off.
01:45 DV__ joined #gluster
02:00 Hamcube2 joined #gluster
02:01 Hamcube2 I've been working on a problem over the last few days where my cluster is down. All glusterd servers stop as soon as there is more than 1 node on the network.
02:01 Hamcube2 This all happened after I had added a node and attempted a rebalance operation
02:02 Hamcube2 from then, all glusterd servers hang, and all cli commands hang as well. I can get one node up at a time and be responsive, but any more than 1 node and it goes down
02:02 Hamcube2 Any thoughts as to what's going on? Perhaps I should just blow all the file attrs and the /var/lib/glusterd directory away and make a new volume entirely, then migrate the data into that?
02:03 Hamcube2 That will take some time...
02:03 bharata-rao joined #gluster
02:03 Hamcube2 I'd like to repair it if possible as opposed to wiping the slate clean and glossing over what triggered it in the first place.
02:06 Hamcube2 Hm, DNS is currently wonky in that if we get an NXDOMAIN, it redirects to one of the servers in the cluster. Don't ask why, it just does :) I wonder if that could be causing problems?
02:08 DV joined #gluster
02:35 Hamcube2 /var/lib/glusterd/vols/VOLNAME/node_state.info shows different rebalance_status, status, rebalance_op and rebalance_id values
02:35 Hamcube2 depending on which node you're looking at
02:36 Hamcube2 I renamed the node_state.info file and now we're back up and running again
02:37 Hamcube2 Yup, we're back in business. Took me about 4 days to work that out
02:37 Hamcube2 Now, what happened to cause the crashes, and why did deleting that file resolve it?
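[Editor's note: a sketch of how the node_state.info mismatch above could be spotted quickly; hostnames and the volume name are placeholders. Stop glusterd on a node before moving or editing anything under /var/lib/glusterd.]

    for h in node1 node2 node3 node4; do
        echo "== $h"
        ssh "$h" cat /var/lib/glusterd/vols/VOLNAME/node_state.info
    done
    # on the affected node, before renaming the file:
    # service glusterd stop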
02:41 side_control joined #gluster
02:43 shubhendu__ joined #gluster
02:48 Hamcube2 Looks like attempting the rebalance kills it
02:48 Hamcube2 Able to reproduce. I'll file a bug.
02:48 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
02:54 julim joined #gluster
02:57 Peppaq joined #gluster
03:01 RameshN joined #gluster
03:01 Hamcube2 Looks exactly like this one : https://bugzilla.redhat.com/show_bug.cgi?id=1227677
03:01 glusterbot Bug 1227677: high, unspecified, ---, spalai, ASSIGNED , Glusterd crashes and cannot start after rebalance
03:02 kanagaraj joined #gluster
03:08 overclk joined #gluster
03:19 rafi joined #gluster
03:21 DV joined #gluster
03:24 Gill joined #gluster
03:26 RameshN joined #gluster
03:39 dusmant joined #gluster
03:39 soumya joined #gluster
03:41 rafi1 joined #gluster
03:44 RameshN joined #gluster
03:57 TheSeven joined #gluster
04:00 Jandre joined #gluster
04:02 itisravi joined #gluster
04:08 paulc_AndChat joined #gluster
04:17 ppai joined #gluster
04:19 cornus_ammonis joined #gluster
04:24 shubhendu__ joined #gluster
04:26 bharata_ joined #gluster
04:27 kanagaraj joined #gluster
04:27 atinmu joined #gluster
04:28 nbalacha joined #gluster
04:41 jiffin joined #gluster
04:41 smohan joined #gluster
04:47 ramteid joined #gluster
04:50 zeittunnel joined #gluster
04:50 Alex joined #gluster
04:54 gem joined #gluster
04:58 yazhini joined #gluster
05:01 sakshi joined #gluster
05:02 deepakcs joined #gluster
05:04 dusmant joined #gluster
05:05 anil joined #gluster
05:06 rafi joined #gluster
05:11 mjrosenb joined #gluster
05:13 hgowtham joined #gluster
05:15 ashiq joined #gluster
05:16 maveric_amitc_ joined #gluster
05:17 itisravi left #gluster
05:18 mcpierce joined #gluster
05:25 karnan joined #gluster
05:27 schandra joined #gluster
05:30 Manikandan joined #gluster
05:31 DV joined #gluster
05:33 spandit joined #gluster
05:33 rgustafs joined #gluster
05:34 bennyturns joined #gluster
05:46 pppp joined #gluster
05:47 smohan joined #gluster
05:48 Bhaskarakiran joined #gluster
05:49 zeittunnel joined #gluster
05:52 soumya joined #gluster
05:59 kshlm joined #gluster
06:02 edong23 joined #gluster
06:05 saurabh joined #gluster
06:06 vimal joined #gluster
06:12 soumya_ joined #gluster
06:13 anrao joined #gluster
06:15 hagarth joined #gluster
06:19 smohan joined #gluster
06:20 dusmant joined #gluster
06:20 shubhendu__ joined #gluster
06:25 atalur joined #gluster
06:32 poornimag joined #gluster
06:33 glusterbot News from newglusterbugs: [Bug 1230007] [Backup]: 'New' as well as 'Modify' entry getting recorded for a newly created hardlink <https://bugzilla.redhat.com/show_bug.cgi?id=1230007>
06:37 DV joined #gluster
06:38 nbalachandran_ joined #gluster
06:41 arao joined #gluster
06:42 kdhananjay joined #gluster
06:43 dusmant joined #gluster
06:43 shubhendu__ joined #gluster
06:43 * deepakcs tried gluster on ppc64le arch system, basic distribute volume works!
06:47 kdhananjay joined #gluster
06:53 al joined #gluster
07:01 [Enrico] joined #gluster
07:02 nsoffer joined #gluster
07:03 glusterbot News from newglusterbugs: [Bug 1230017] [Backup]: 'Glusterfind list' should display an appropriate output when there are no active sessions <https://bugzilla.redhat.com/show_bug.cgi?id=1230017>
07:04 Philambdo joined #gluster
07:12 atrius` joined #gluster
07:13 nbalachandran_ joined #gluster
07:29 hagarth joined #gluster
07:34 glusterbot News from newglusterbugs: [Bug 1230026] BVT: glusterd crashed and dumped during upgrade (on rhel7.1 server) <https://bugzilla.redhat.com/show_bug.cgi?id=1230026>
07:47 smohan joined #gluster
07:52 hagarth joined #gluster
07:54 soumya_ joined #gluster
07:55 c0m0 joined #gluster
07:56 Slashman joined #gluster
07:56 haomaiwa_ joined #gluster
07:58 LebedevRI joined #gluster
07:59 [Enrico] joined #gluster
07:59 aravindavk joined #gluster
08:07 liquidat joined #gluster
08:18 jcastill1 joined #gluster
08:18 kotreshhr joined #gluster
08:23 jcastillo joined #gluster
08:27 Trefex joined #gluster
08:27 Trefex joined #gluster
08:31 arcolife joined #gluster
08:33 jcastill1 joined #gluster
08:33 rafi1 joined #gluster
08:37 autoditac joined #gluster
08:38 rgustafs joined #gluster
08:38 jcastillo joined #gluster
08:48 dusmant joined #gluster
08:50 arao joined #gluster
08:51 anrao joined #gluster
09:03 nsoffer joined #gluster
09:06 [Enrico] joined #gluster
09:09 raghu` joined #gluster
09:11 spalai joined #gluster
09:23 shubhendu__ joined #gluster
09:26 rafi joined #gluster
09:28 badone_ joined #gluster
09:36 gvandeweyer joined #gluster
09:37 gvandeweyer Hi, in the mailinglists, I found that it should be possible to assign priority/weights to certain bricks in release 3.6+ (I have 3.6.2)
09:38 gvandeweyer is there documentation around somewhere?  We have a big storage node which is being fully congested (iostat shows 100% %util), while other smaller nodes are only used about 25%. I'd like to rebalance this situation by trying to shift brick/node priority
09:43 karnan joined #gluster
09:43 shubhendu__ joined #gluster
09:47 ira joined #gluster
09:55 arcolife joined #gluster
10:02 dusmant joined #gluster
10:02 kaushal_ joined #gluster
10:07 vovcia joined #gluster
10:07 vovcia hi o/
10:07 vovcia do You know of any selinux support on gluster?
10:09 kshlm joined #gluster
10:14 ndevos vovcia: selinux on the client-side or on the server-side?
10:14 ndevos actually, Fedora can run Gluster on the server in enforcing for all I know
10:15 ndevos and client-side should be able to store the selinux xattrs on the fuse-mounts, but you need to mount with the "selinux" mount option
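[Editor's note: a sketch of the client-side mount ndevos describes; server name, volume name and mountpoint are hypothetical.]

    mount -t glusterfs -o selinux server1:/myvol /mnt/gluster
    # or persistently, in /etc/fstab:
    # server1:/myvol  /mnt/gluster  glusterfs  defaults,_netdev,selinux  0 0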
10:15 nsoffer joined #gluster
10:18 eljrax JoeJulian: jeez, whatever I google to do with glusterfs, I end up on your blog :) Just wanted to say massive thanks for your effort, very good stuff! Shame you've gone a bit quiet :/
10:19 arao joined #gluster
10:27 gvandeweyer We have a big storage node which is being fully congested (iostat shows 100% %util), while other smaller nodes are only used about 25%. I'd like to rebalance this situation by trying to shift brick/node priority. Is that possible?
10:29 arao joined #gluster
10:39 vovcia ndevos: client side - i have Operation not supported on chcon on fuse mount
10:40 arcolife joined #gluster
10:41 ndevos vovcia: did you mount with the "selinux" option?
10:42 arao joined #gluster
10:42 vovcia ndevos: yes
10:43 vovcia centos 7, glusterfs 3.7.1, kernel 4.0.4
10:43 side_control joined #gluster
10:44 vovcia i think FUSE dont have selinux support
10:44 ndevos vovcia: FUSE only passes the extended attributes on, it is the Linux-VFS that handles the selinux logic...
10:45 ndevos vovcia: do you see selinux errors on the bricks?
10:45 cornusammonis joined #gluster
10:46 ndevos I thought mounting with "selinux" was the only thing that was needed, but maybe something changed?
10:46 vovcia hmm maybe i will try with permissive
10:51 Staples84 joined #gluster
10:53 Pupeno joined #gluster
10:53 Pupeno joined #gluster
10:55 firemanxbr joined #gluster
10:59 hgowtham joined #gluster
11:02 rafi1 joined #gluster
11:08 vovcia ndevos: kernel 4.0.5, selinux permissive, chcon failed to change context operation not supported
11:08 vovcia ndevos: fuse mount.
11:10 arcolife joined #gluster
11:14 glusterbot News from resolvedglusterbugs: [Bug 1214168] While running i/o's from cifs mount huge logging errors related to quick_read performance xlator : invalid argument:iobuf <https://bugzilla.redhat.com/show_bug.cgi?id=1214168>
11:18 cyberbootje joined #gluster
11:19 atalur joined #gluster
11:22 ernetas joined #gluster
11:22 ernetas Hey guys.
11:25 rafi joined #gluster
11:31 spot joined #gluster
11:31 spot joined #gluster
11:34 nishanth joined #gluster
11:39 premera joined #gluster
11:46 rajesh joined #gluster
11:52 julim joined #gluster
11:57 vovcia hmm it seems there is no way to support selinux on gluster
11:57 vovcia neither with fuse or nfs mount
11:58 zeittunnel joined #gluster
11:59 B21956 joined #gluster
12:01 jdarcy joined #gluster
12:05 glusterbot News from newglusterbugs: [Bug 1230169] afr: dereference of a null pointer <https://bugzilla.redhat.com/show_bug.cgi?id=1230169>
12:11 arao joined #gluster
12:11 itisravi joined #gluster
12:13 Jandre joined #gluster
12:17 poornimag joined #gluster
12:17 arcolife joined #gluster
12:20 ppai joined #gluster
12:23 eljrax What's the overhead of volume profiling?
12:23 eljrax Is it one of those things best avoided on a live volume?
12:29 rwheeler joined #gluster
12:31 vimal joined #gluster
12:32 jiffin joined #gluster
12:34 tanuck joined #gluster
12:43 Gill joined #gluster
12:43 msvbhat eljrax: I don't think volume profile adds any overhead
12:44 msvbhat eljrax: You *should* be able to run profiling without any issues on a live volume
12:46 eljrax Like the arse-covering emphasis there ;) Thanks!
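[Editor's note: for reference, the profiling commands under discussion; "gvol0" follows the volume name eljrax uses later in the log.]

    gluster volume profile gvol0 start
    gluster volume profile gvol0 info    # per-brick latency and FOP counts
    gluster volume profile gvol0 stop    # turn it off again when done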
12:49 arao joined #gluster
12:52 arcolife joined #gluster
12:54 NuxRo hi, is there a way to tell "gluster volume status" to output results in a more script-friendly format? I'd like an easy way to find out when a brick or nfs process is not online (not showing Y)
12:56 hagarth NuxRo: volume status --xml ?
12:57 NuxRo oh, had no idea, help page does not show it (v3.4 btw)
12:57 NuxRo what other switches are available?
12:57 hagarth NuxRo: that's the only one I can think of .. json would be nice too
12:58 NuxRo thanks :)
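[Editor's note: a rough sketch of scripting against the XML output; element names can vary between releases (NuxRo is on 3.4), so treat the grep pattern as an assumption and inspect your own version's output first.]

    # pretty-print the XML and pull out hostnames with their status fields
    gluster volume status myvol --xml | xmllint --format - | grep -E '<hostname>|<status>'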
13:02 plarsen joined #gluster
13:04 m0zes joined #gluster
13:05 m0zes joined #gluster
13:06 m0zes joined #gluster
13:06 ppai joined #gluster
13:08 klaxa|work joined #gluster
13:15 shubhendu__ joined #gluster
13:15 rafi joined #gluster
13:15 pppp joined #gluster
13:18 dusmant joined #gluster
13:20 tanuck joined #gluster
13:23 nbalachandran_ joined #gluster
13:24 dgandhi joined #gluster
13:24 tanuck joined #gluster
13:25 tanuck joined #gluster
13:27 coredump joined #gluster
13:28 hagarth joined #gluster
13:28 coredump joined #gluster
13:29 smohan left #gluster
13:29 aaronott joined #gluster
13:30 coredump joined #gluster
13:33 Trefex joined #gluster
13:33 anti[Enrico] joined #gluster
13:38 T0aD joined #gluster
13:41 atalur joined #gluster
13:45 [Enrico] joined #gluster
13:45 glusterbot News from resolvedglusterbugs: [Bug 1218863] `ls' on a directory which has files with mismatching gfid's does not list anything <https://bugzilla.redhat.com/show_bug.cgi?id=1218863>
13:50 msciciel_ joined #gluster
13:55 [Enrico] joined #gluster
13:57 rgustafs joined #gluster
14:01 hagarth joined #gluster
14:06 rjoseph joined #gluster
14:06 harish joined #gluster
14:07 kotreshhr left #gluster
14:11 rgustafs joined #gluster
14:13 coredump joined #gluster
14:14 coredumb joined #gluster
14:14 coredumb hellp
14:14 coredumb hello*
14:19 coredumb I was using gluster before 3.7 and the new arbiter disk option
14:20 coredumb and I was adding more peers to my 2 servers 2 bricks replica to prevent split brains
14:20 coredumb is that still valid ?
14:30 ndevos vovcia: NFSv3 does not have extended attributes (or "labelled NFS"), so it can not support SElinux
14:31 ndevos vovcia: but, if selinux on fuse fails, you should file a bug for that, I was under the impression it should work and was tested
14:31 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
14:31 lpabon joined #gluster
14:36 eljrax So I've got a bit of a problem. Apparently there's a known bug where glusterd crashes when converting between replicated and distributed-replicated. So now I've got a 2x2 volume, with one brick I've had to reset. I can't re-probe, because the other bricks think it's already there. The uuid of the original brick has changed, and I can't remove the crashed brick because the replica count is then wrong
14:36 eljrax So I think I'm a bit stuck?
14:36 arcolife joined #gluster
14:40 theron joined #gluster
14:41 coredump joined #gluster
14:48 georgeh-LT2 joined #gluster
15:02 jiffin joined #gluster
15:05 kshlm joined #gluster
15:12 julim joined #gluster
15:16 pppp joined #gluster
15:19 JoeJulian eljrax: Thanks for the kudos. Unfortunately the stuff I've been doing at work can't be talked about yet and the stuff I've done off-work hasn't been very interesting. Hopefully that will change in a couple months.
15:21 JoeJulian coredumb: sure it's still a valid way to do it.
15:21 coredumb JoeJulian: ok
15:21 coredumb good to know :)=
15:22 ju5t joined #gluster
15:22 coredumb JoeJulian: what's the arbiter disk size requirements ?
15:22 JoeJulian Unless I'm completely wrong, I believe it's none.
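[Editor's note: for context, the 3.7 arbiter syntax coredumb is asking about. The arbiter brick holds only file names and metadata, not file data, so its space needs are small; hostnames and brick paths below are hypothetical.]

    gluster volume create myvol replica 3 arbiter 1 \
        server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/arbiter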
15:23 JoeJulian eljrax: Is there a bug report for that?
15:23 ju5t hello, we have servers running debian squeeze and it seems that there are no glusterfs client packages available there, do we have any other options besides backporting from wheezy for example?
15:25 eljrax JoeJulian: https://bugzilla.redhat.com/show_bug.cgi?id=1228093  it's that one I think
15:25 glusterbot Bug 1228093: unspecified, unspecified, ---, spalai, POST , Glusterd crash
15:25 JoeJulian ju5t: iirc, someone tried to do that but there were libraries missing from that release and it became a huge chore.
15:27 ju5t JoeJulian: oh, that doesn't sound like much fun. We're apparently running into some memory leaks with the NFS clients we're using now and I would much rather use the GlusterFS client instead. We'll try to find a workaround.
15:29 maveric_amitc_ joined #gluster
15:30 RameshN joined #gluster
15:32 anrao joined #gluster
15:33 coredumb JoeJulian: ok
15:37 JoeJulian eljrax: Ah, ok, so that's the rebalance bug. :( I haven't even seen a workaround for that yet, just the code fix. If the volume definitions are mismatched, you can rsync /var/lib/glusterd/vols from the good server to the bad one. If the volume-id has changed, you can change it on the bricks with setfattr.
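[Editor's note: a sketch of the two repairs JoeJulian mentions; the volume name, brick path and "goodnode" host are placeholders, and glusterd should be stopped on the node being repaired.]

    # 1) copy the volume definitions from a good node to the bad one
    rsync -a goodnode:/var/lib/glusterd/vols/ /var/lib/glusterd/vols/
    # 2) re-stamp the volume-id xattr on a brick whose value no longer matches
    VOLID=$(grep volume-id /var/lib/glusterd/vols/myvol/info | cut -d= -f2 | tr -d '-')
    setfattr -n trusted.glusterfs.volume-id -v 0x$VOLID /bricks/brick1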
15:38 coredump joined #gluster
15:38 JoeJulian That's not going to fix the rebalance though. You're hosed on that for about another week until the next 3.7 is released.
15:40 eljrax Luckily this isn't in production yet, downgrading to 3.7.0 just to get it working for now
15:40 eljrax And will go to 3.7.2 next week
15:40 JoeJulian cool
15:40 JoeJulian Let me know if that works. 3.7.0 scares me.
15:41 R0ok_ joined #gluster
15:49 eljrax I wasn't scared of 3.7.0 until you just said that :)
15:58 shubhendu__ joined #gluster
15:59 rafi1 joined #gluster
15:59 papamoose joined #gluster
16:00 eljrax At least it didn't crash.. But it says the rebalance failed on 2/4 nodes
16:02 RameshN joined #gluster
16:03 eljrax Even though it's a 2x2 volume, all files were put on all bricks. Although newly created files are distributed as expected
16:06 JoeJulian Probably not all. Some of them are probably dht pointers, size 0 mode 1000
16:07 rotbeard joined #gluster
16:09 eljrax Right you are :)
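[Editor's note: a quick way to see the pointer files JoeJulian describes; the brick path and the example file path are placeholders.]

    # DHT link files are zero-byte, mode 1000 (sticky bit only),
    # and carry a trusted.glusterfs.dht.linkto xattr naming the real subvolume
    find /bricks/brick1 -type f -perm 1000 -size 0
    getfattr -n trusted.glusterfs.dht.linkto /bricks/brick1/path/to/one/such/file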
16:18 jeffcraighead joined #gluster
16:20 jeffcraighead hey all. I'm attempting to install gluster 3.6.3 on Vivid using the PPA. However it looks like the installer didn't put in all the init scripts. I'm getting the following errors:
16:21 jeffcraighead Creating config file /etc/default/nfs-common with new version
16:21 jeffcraighead Adding system user `statd' (UID 108) ...
16:21 jeffcraighead Adding new user `statd' (UID 108) with group `nogroup' ...
16:21 jeffcraighead Not creating home directory `/var/lib/nfs'.
16:21 jeffcraighead invoke-rc.d: gssd.service doesn't exist but the upstart job does. Nothing to start or stop until a systemd or init job is present.
16:21 jeffcraighead invoke-rc.d: idmapd.service doesn't exist but the upstart job does. Nothing to start or stop until a systemd or init job is present.
16:21 jeffcraighead nfs-utils.service is a disabled or a static unit, not starting it.
16:21 jeffcraighead Setting up glusterfs-server (3.6.3-ubuntu1~vivid1) ...
16:21 jeffcraighead invoke-rc.d: glusterfs-server.service doesn't exist but the upstart job does. Nothing to start or stop until a systemd or init job is present.
16:21 jeffcraighead at the end of the install and glusterd isn't running
16:21 jeffcraighead executing service gluster start gives the following: root@www-2:~# service gluster status
16:21 jeffcraighead ● gluster.service
16:21 jeffcraighead Loaded: not-found (Reason: No such file or directory)
16:21 jeffcraighead Active: inactive (dead)
16:22 jeffcraighead any suggestions?
16:28 gem joined #gluster
16:32 cholcombe joined #gluster
16:33 bennyturns joined #gluster
16:33 vovcia jeffcraighead: maybe try glusterd.service
16:33 jeffcraighead tried that as well
16:34 jeffcraighead root@www-2:/etc/init.d# service glusterd start
16:34 jeffcraighead Failed to start glusterd.service: Unit glusterd.service failed to load: No such file or directory.
16:42 vovcia seems like problem with vivid
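[Editor's note: a diagnostic sketch only. Ubuntu 15.04 (vivid) boots with systemd, and if the PPA package only ships an upstart job there is no glusterd.service or glusterfs-server.service unit for systemctl to find. Checking what the package installed and running the daemon by hand narrows it down.]

    dpkg -L glusterfs-server | grep -E '/etc/init|systemd'   # which init scripts/units shipped?
    sudo glusterd --debug                                    # run in the foreground to confirm the daemon itself works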
16:44 jiffin1 joined #gluster
16:45 RameshN joined #gluster
16:45 n-st joined #gluster
16:45 jiffin1 joined #gluster
16:52 soumya_ joined #gluster
16:54 jeffcraighead left #gluster
16:56 rafi joined #gluster
16:59 rafi joined #gluster
17:06 Rapture joined #gluster
17:10 papamoose joined #gluster
17:16 glusterbot News from resolvedglusterbugs: [Bug 1222750] non-root geo-replication session goes to faulty state, when the session is started <https://bugzilla.redhat.com/show_bug.cgi?id=1222750>
17:16 glusterbot News from resolvedglusterbugs: [Bug 1223741] non-root geo-replication session goes to faulty state, when the session is started <https://bugzilla.redhat.com/show_bug.cgi?id=1223741>
17:20 empyrean_ joined #gluster
17:22 empyrean_ hey all, i have a situation where out of, let's say, ten bricks, one of them was full. all the disks are of the same size, any insights/possible reasons for this?
17:25 JoeJulian @lucky dht missses are expensive
17:25 glusterbot JoeJulian: https://joejulian.name/blog/dht-misses-are-expensive/
17:26 JoeJulian empyrean_: ^ that article describes how dht works, which should answer your question.
17:26 hagarth joined #gluster
17:26 empyrean_ @JoeJulian I'll go check out that page, thank you.
17:33 bennyturns joined #gluster
17:38 tessier JoeJulian: Any idea why I would only be getting 28MB/s streaming throughput to gluster on a gigabit ethernet network to 7200 SATA disks? I'm doing a simple test with dd if=/dev/zero of=foo bs=1M count=1000
17:39 tessier gluster 3.7 using native client
17:40 JoeJulian No idea. Typical installations can max that out easily.
17:41 JoeJulian You're doing a quarter gigabit which doesn't really offer any suggestions on where to look.
17:45 hagarth tessier: what volume type are you using?
17:45 PeterA joined #gluster
17:46 PeterA any eta on 3.5.4 on ubuntu
17:46 glusterbot News from resolvedglusterbugs: [Bug 1222942] BVT: Posix crash while running BVT on 3.7beta2 build on rhel6.6 <https://bugzilla.redhat.com/show_bug.cgi?id=1222942>
17:46 PeterA ?
17:46 JoeJulian Oh gah! I read streaming throughput as read throughput. Didn't even notice the flow direction.
17:47 JoeJulian kkeithley_, semiosis? ^ eta question
17:47 tessier hagarth: volume type?
17:48 JoeJulian tessier: if that's a replica 2 you're doing full duplex. If that's replica 4 then you're filling your bandwidth.
17:48 tessier It's a replica 2
17:49 JoeJulian So you have a full duplex connection somewhere. Check your switch(es) and your cables.
17:53 tessier You mean half duplex? Everything is supposed to be full duplex...
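[Editor's note: rough arithmetic plus an isolation test. With replica 2 the FUSE client sends every write to both servers, so a single gigabit NIC on the client caps application-visible writes around 55-60 MB/s (~118 MB/s wire rate split two ways); 28 MB/s is well below even that, so it is worth testing the network and the disks separately. The paths and the availability of iperf are assumptions.]

    # raw network throughput between client and each server
    iperf -s                      # on the server
    iperf -c 10.0.1.12 -t 10      # on the client
    # raw disk write on the brick filesystem, bypassing gluster
    dd if=/dev/zero of=/bricks/brick1/ddtest bs=1M count=1000 conv=fdatasync
    # same test through the gluster mount, flushing at the end for a fair number
    dd if=/dev/zero of=/gluster/disk07a/ddtest bs=1M count=1000 conv=fdatasync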
18:02 eljrax gluster volume get gvol0 all .. I see performance.cache-size mentioned twice in that output
18:03 eljrax Once with 128MB and once with 32MB, which is confusing
18:04 JoeJulian oops, yeah, hehe. I knew what I was thinking. ;)
18:05 eljrax gluster volume get gvol0 performance.cache-size  only shows 32MB
18:06 glusterbot News from newglusterbugs: [Bug 1221473] BVT: Posix crash while running BVT on 3.7beta2 build on rhel6.6 <https://bugzilla.redhat.com/show_bug.cgi?id=1221473>
18:07 empyrean_ JoeJulian: I've read that article, and am somewhat confused, it seems like dht deals mostly w/ reads. I assume now, that before a file is written onto a brick there will be a lookup? Can't exactly find the bit about why out of ten bricks... one was constantly chosen to be written to till it's 100% full, while the rest are at sub 80%. Or did I miss something important?
18:10 JoeJulian The filename is hashed. The brick is chosen based on how that filename hash matches the hash allocation for a dht subvolume.
18:11 JoeJulian For instance, if you always created new files called tmpfile1 then renamed them, every file would reside on the same brick because tmpfile1 will (obviously) always have the same hash which will always map to the same brick.
18:11 JoeJulian So there's an element of randomness to which brick a file resides on based on how the filename hashes.
18:12 empyrean_ right
18:14 JoeJulian So if you either create files with the same filename a lot then rename them, or you just got really unlucky with your filename hashes then you could have an abnormal allocation.
18:14 empyrean_ I see.
18:14 JoeJulian Or, perhaps you just have one really big file that's statistically abnormal.
18:15 JoeJulian I would like to recommend a rebalance but that's hit and miss as to whether it's going to work (there's currently a discussion on a dht2 redesign).
18:16 empyrean_ All these scenarios are plausible.
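[Editor's note: one way to see the hash-range mechanism JoeJulian describes. Each brick records, per directory, the range of hash values it owns in an xattr; a file name is hashed and the file lands on whichever brick's range contains that hash. The brick path and directory are placeholders.]

    # run on a brick server; shows the layout range assigned to this brick for that directory
    getfattr -e hex -n trusted.glusterfs.dht /bricks/brick1/some/directory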
18:18 empyrean_ It was a miss, sort of: disk space was freed, going from MBs to probably 1+ GB, but it later got filled again. Which begs a digression: should I be performing rebalancing multiple times?
18:24 empyrean_ Sorry, kinda new to GlusterFS.
18:24 empyrean_ Sorry, new to GlusterFS.
18:24 JoeJulian no need to apologize. That's why i hang out here.
18:25 JoeJulian You may rebalance multiple times. It doesn't harm anything.
18:25 JoeJulian If nothing needs to move because the hash allocation doesn't change and the files all exist on the correct subvolume according to their hash, nothing will move.
18:26 empyrean_ I see.
18:26 JoeJulian If a brick exceeds cluster.min-free-disk (see gluster volume set help) then a *new* file will be created on the next dht subvolume with sufficient free space.
18:27 JoeJulian But existing files may continue to grow and fill the brick.
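[Editor's note: for reference, the option JoeJulian points at; the volume name and the 10% threshold are illustrative.]

    gluster volume set myvol cluster.min-free-disk 10%
    gluster volume set help | grep -A 3 min-free-disk   # read the built-in description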
18:31 empyrean_ The scenarios you've mentioned should cover 90% of.... possible causes.
18:33 empyrean_ I will look into each scenario further. Thank you for your time.
18:35 arao joined #gluster
18:46 bfoster joined #gluster
18:46 B21956 joined #gluster
18:47 stickyboy joined #gluster
18:58 nsoffer joined #gluster
19:09 bene2 joined #gluster
19:15 jiffin joined #gluster
19:20 Bardack joined #gluster
19:20 Bardack joined #gluster
19:36 arao joined #gluster
19:49 bene2 joined #gluster
19:57 bene2 joined #gluster
20:04 julim joined #gluster
20:06 arcolife joined #gluster
20:08 paulc_AndChat joined #gluster
20:22 marcelosnap joined #gluster
20:23 Twistedgrim joined #gluster
20:23 PaulCuzner joined #gluster
20:24 marcelosnap left #gluster
20:40 woakes07004 joined #gluster
20:40 badone_ joined #gluster
21:12 bene2 joined #gluster
22:16 Twistedgrim joined #gluster
22:26 cholcombe why does gluster show localhost in the pool list instead of the IP address or DNS name like the other peers?
22:27 mrEriksson It always does that
22:27 mrEriksson Probably uses 127.0.0.1 as address for the local node
22:27 cholcombe why not be consistent and show the IP or dns name ?
22:28 mrEriksson Sorry, I don't know
22:28 cholcombe :(
22:29 cholcombe it makes it difficult to code against because now I have to dig up the IP address somehow
22:33 mrEriksson Well, I'm sure someone has a better answer for you, I'm just a user :)
22:48 aaronott joined #gluster
23:07 corretico joined #gluster
23:21 plarsen joined #gluster
23:23 Gill joined #gluster
23:25 DV joined #gluster
23:25 gildub joined #gluster
23:33 theron_ joined #gluster
23:41 arao joined #gluster
