
IRC log for #gluster, 2014-11-20


All times shown according to UTC.

Time Nick Message
00:29 bala joined #gluster
00:32 Paul-C joined #gluster
00:53 tom[] when you apply a software update to gluster, should you restart each server in series or take them all down and start them all again?
01:02 bala joined #gluster
01:31 bene joined #gluster
01:41 MugginsM joined #gluster
01:42 kumar joined #gluster
01:54 calisto joined #gluster
02:05 harish_ joined #gluster
02:07 haomaiwa_ joined #gluster
02:11 msmith_ joined #gluster
02:13 MugginsM joined #gluster
02:27 jmarley_ joined #gluster
02:40 meghanam__ joined #gluster
02:40 dusmant joined #gluster
02:41 meghanam joined #gluster
02:56 yuga_ joined #gluster
02:57 yuga_ hi
02:57 glusterbot yuga_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
02:59 yuga_ I wish to deploy a storage system for a client who is into video rendering. They have about 600 workstations and around 100 render servers. The existing storage is very slow and they are willing to let me set up new storage. Can any of you suggest suitable configurations? The storage has to be fast, as the data transferred by the workstations would be huge, and since it is video the storage has to be cheap as well
03:07 kumar joined #gluster
03:08 jbrooks joined #gluster
03:09 meghanam_ joined #gluster
03:10 meghanam joined #gluster
03:12 hagarth joined #gluster
03:19 bharata-rao joined #gluster
03:26 yuga_ I wish to deploy a storage system for a client who is into video rendering. They have about 600 workstations and around 100 render servers. The existing storage is very slow and they are willing to let me set up new storage. Can any of you suggest suitable configurations? The storage has to be fast, as the data transferred by the workstations would be huge, and since it is video the storage has to be cheap as well
03:38 sage_ joined #gluster
03:38 msmith joined #gluster
03:45 jbrooks joined #gluster
03:45 maveric_amitc_ joined #gluster
03:56 mrint21h joined #gluster
03:56 mrint21h Cannot compile glusterFS
03:56 mrint21h http://pastie.org/9731387
03:56 mrint21h well it can compile, fails at make install
03:57 itisravi joined #gluster
04:06 shubhendu joined #gluster
04:07 RameshN joined #gluster
04:16 aravindavk joined #gluster
04:20 msmith joined #gluster
04:23 nbalachandran joined #gluster
04:24 elyograg regarding my thread on the mailing list about bug 1010241 and NFS crashes ... it's helpful for decision making to see that JoeJulian no longer recommends 3.4 because large numbers of critical fixes are not backported.
04:24 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1010241 high, unspecified, ---, rgowdapp, CLOSED CURRENTRELEASE, nfs: crash with nfs process
04:24 elyograg sucks because I'm running 3.4.2, though. :)
04:27 feeshon joined #gluster
04:28 jbrooks joined #gluster
04:30 anoopcs joined #gluster
04:32 cyberbootje joined #gluster
04:36 nishanth joined #gluster
04:37 rjoseph joined #gluster
04:39 ndarshan joined #gluster
04:40 kanagaraj joined #gluster
04:43 rafi1 joined #gluster
04:49 soumya_ joined #gluster
04:50 glusterbot News from newglusterbugs: [Bug 1165938] Fix regression test spurious failures <https://bugzilla.redhat.com/show_bug.cgi?id=1165938>
04:51 msmith joined #gluster
04:52 spandit joined #gluster
04:52 jiffin joined #gluster
04:52 msmith joined #gluster
04:52 AaronGr joined #gluster
04:52 kshlm joined #gluster
05:05 dusmant joined #gluster
05:05 pp joined #gluster
05:05 meghanam joined #gluster
05:06 meghanam_ joined #gluster
05:06 RameshN joined #gluster
05:06 bala joined #gluster
05:10 corretico joined #gluster
05:11 atalur joined #gluster
05:12 corretico joined #gluster
05:22 corretico joined #gluster
05:25 corretico joined #gluster
05:27 deepakcs joined #gluster
05:27 corretico_ joined #gluster
05:30 corretico joined #gluster
05:30 shubhendu joined #gluster
05:31 saurabh joined #gluster
05:32 nbalachandran joined #gluster
05:33 feeshon joined #gluster
05:39 lalatenduM joined #gluster
05:43 hagarth joined #gluster
05:44 corretico_ joined #gluster
05:45 corretico_ joined #gluster
05:47 corretico joined #gluster
05:48 ramteid joined #gluster
05:48 ppai joined #gluster
05:53 vimal joined #gluster
05:53 corretico joined #gluster
06:00 overclk joined #gluster
06:00 nbalachandran joined #gluster
06:08 bala1 joined #gluster
06:15 shubhendu joined #gluster
06:18 ndarshan joined #gluster
06:19 meghanam_ joined #gluster
06:19 meghanam joined #gluster
06:23 nishanth joined #gluster
06:26 karnan joined #gluster
06:37 sahina joined #gluster
06:39 kshlm joined #gluster
06:40 msmith joined #gluster
06:42 maveric_amitc_ joined #gluster
06:46 sputnik13 joined #gluster
06:48 sputnik13 joined #gluster
06:49 sputnik13 joined #gluster
06:51 sputnik13 joined #gluster
06:52 DV joined #gluster
07:02 LebedevRI joined #gluster
07:09 Anuradha joined #gluster
07:15 ndarshan joined #gluster
07:17 shubhendu joined #gluster
07:20 nishanth joined #gluster
07:26 meghanam joined #gluster
07:26 meghanam_ joined #gluster
07:29 dusmant joined #gluster
07:29 saurabh joined #gluster
07:30 ricky-ticky1 joined #gluster
07:32 tryggvil joined #gluster
07:36 Fen2 joined #gluster
07:37 RameshN joined #gluster
07:48 aulait joined #gluster
07:48 tvb joined #gluster
07:50 aravindavk joined #gluster
08:01 soumya_ joined #gluster
08:03 Fen2 joined #gluster
08:03 tomased joined #gluster
08:09 [Enrico] joined #gluster
08:10 bala joined #gluster
08:14 elico joined #gluster
08:15 RameshN joined #gluster
08:20 SOLDIERz joined #gluster
08:26 anoopcs joined #gluster
08:28 17SAAYIR3 joined #gluster
08:28 7YUAADRWB joined #gluster
08:29 7F1ABYA8C joined #gluster
08:29 msmith joined #gluster
08:37 lalatenduM joined #gluster
08:47 shylesh__ joined #gluster
08:47 Paul-C joined #gluster
08:47 dusmant joined #gluster
08:49 sahina joined #gluster
08:50 nishanth joined #gluster
08:51 glusterbot News from newglusterbugs: [Bug 1165996] cmd log history should not be a hidden file <https://bugzilla.redhat.com/show_bug.cgi?id=1165996>
08:52 bala joined #gluster
08:52 lalatenduM joined #gluster
08:56 Slashman joined #gluster
09:06 liquidat joined #gluster
09:08 snowboarder04 joined #gluster
09:11 anil joined #gluster
09:12 MrAbaddon joined #gluster
09:16 rjoseph joined #gluster
09:21 glusterbot News from newglusterbugs: [Bug 1158129] After readv, md-cache only checks cache times if read was empty <https://bugzilla.redhat.com/show_bug.cgi?id=1158129>
09:21 glusterbot News from newglusterbugs: [Bug 1158126] md-cache checks for modification using whole seconds only <https://bugzilla.redhat.com/show_bug.cgi?id=1158126>
09:21 bernardo <semiosis> skippy: every single time i've seen someone report random intermittent ping timeouts it has turned out to be a networking issue
09:22 bernardo I'm having disconnections between client and one server brick, but both are on the same node...
09:25 nishanth joined #gluster
09:26 bernardo By the way, it's always the same brick, and about the same hour
09:26 hagarth bernardo: are you on 3.5.2?
09:26 bernardo hagarth: yes
09:27 sahina joined #gluster
09:27 hagarth bernardo: can you fpaste your volume info?
09:27 bernardo sure
09:28 tvb left #gluster
09:29 dusmant joined #gluster
09:29 bernardo hagarth: http://fpaste.org/152387/41647577/raw/
09:30 bala joined #gluster
09:30 hagarth bernardo: can you increase the ping timeout interval?
09:31 hagarth I see that network.ping-timeout is set to 10 seconds
09:32 meghanam joined #gluster
09:32 meghanam_ joined #gluster
09:32 bernardo i was using 3s, i increased to 10s
09:33 mbukatov joined #gluster
09:33 hagarth bernardo: the default is 42s.. only if there's a specific need the default has to be altered
09:34 harish_ joined #gluster
09:35 bernardo hagarth: i know, but it is for kvm storage, so if i increase too much, i fear my vms will go read only on reboot/crash/etc..
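For reference, a minimal sketch of checking and changing the ping timeout being discussed here; the volume name is a placeholder and the 42-second value is just the default mentioned above:

    # show volume options, including any reconfigured network.ping-timeout
    gluster volume info <volname>
    # go back to the 42s default (or choose a value that fits the workload)
    gluster volume set <volname> network.ping-timeout 42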
09:36 hagarth bernardo: have you tried simulating the problem to see if VMs go read only upon a single storage failure?
09:38 kdhananjay joined #gluster
09:39 ndevos bernardo: instead of changing the ping-timeout, you could think about changing the scsi-timeot inside the VMs (/sys/block/sda/device/timeout)
09:41 bernardo hagarth: i had the problem at first while testing fencing on my cluster nodes (then i changed the default 42s)
09:41 bernardo ndevos: thank you, i will look into it
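A rough sketch of the SCSI timeout change ndevos suggests, run inside the guest; the device name and the 300-second value are only illustrative:

    # current timeout, in seconds, for the guest's virtual disk
    cat /sys/block/sda/device/timeout
    # raise it so the guest survives a longer storage stall without going read-only
    echo 300 > /sys/block/sda/device/timeout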
09:43 bernardo i said it was always the same brick, but i should have said always one of the 3 bricks which are part of the same replica
09:45 pranithk joined #gluster
09:46 pranithk bernardo: Could you provide me with "gluster volume profile <volname> info" output. You need to first "gluster volume profile <volname> start" then wait for the ping-timeout to happen, then provide the output?
09:48 bernardo so, if i have a network problem between two subvolumes/bricks of the same replica, maybe it could still lead to a hang between one client/server even if they are on the same node ?
09:49 hagarth bernardo: the client may stall for ping timeout interval. Once it detects that brick(s) have gone offline, it resumes.
09:50 hagarth bernardo: also note that since client quorum has been set, at least 2/3 of your bricks have to be reachable by the client for writes/updates to go through.
09:50 bernardo pranithk: http://fpaste.org/152391/64770221/raw/
09:51 glusterbot News from newglusterbugs: [Bug 1166020] self-heal-algorithm with option "full" doesn't heal sparse files correctly <https://bugzilla.redhat.com/show_bug.cgi?id=1166020>
09:52 warcisan joined #gluster
09:52 bernardo hagarth: unfortunately i lost 2/3 bricks; last time i had to do a restore on a vm because of filesystem corruption
09:52 ndevos oh, maybe I should pickup/rework the change for bug 1099460
09:52 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1099460 high, high, ---, ndevos, POST , file locks are not released within an acceptable time when a fuse-client uncleanly disconnects
09:52 ndevos well, not sure qemu/kvm uses locks...
09:53 hagarth bernardo: ah that's bad. didn't fsck within the VM help?
09:54 anil joined #gluster
09:54 ndevos bernardo: oh, well, you mention that you have issues with the bricks, are those servers also accessing the contents of the volume?
09:54 ppai joined #gluster
09:55 ndevos if not, that bug is rather irrelevant
09:55 hagarth ndevos: I don't think qemu/kvm holds locks as it doesn't expect multiple writers on the VM image file.
09:55 ndevos hagarth: I also think it does not *need* locks, but you never know
09:56 pranithk bernardo: some of the fsyncs are taking on the order of minutes
09:56 ndevos and, it would be a hypervisor getting the outage, the vm-image could be locked for an extended time frame - that may not be the case here
09:56 pranithk bernardo: pk1@localhost - ~/Downloads
09:56 pranithk 15:25:41 :) ⚡ grep FSYNC max_latency.txt | awk '{print $6}' | grep -E "[0-9]{8,}"
09:56 pranithk 28080755.00
09:56 pranithk 34965398.00
09:56 pranithk 22363912.00
09:56 pranithk 50411451.00
09:57 ndevos pranithk: maybe caused by write-behind?
09:57 ndevos (build up caching)
09:57 pranithk ndevos: one sec
09:58 ndevos bernardo: have you applied the recommendations from http://www.gluster.org/community/documentation/index.php/Libgfapi_with_qemu_libvirt#Tuning_the_volume_for_virt-store ?
09:58 bernardo hagarth: fsck broke the filesystem, but the broken FS was split across several vdisks, and not all vdisks went RO, so i wonder if it could be the reason (some parts of the filesystem writeable but not others)
09:58 ctria|afk joined #gluster
09:58 pranithk bernardo: These are in microseconds and as you can see some of them are taking ~50 seconds
09:58 Fen1 joined #gluster
10:00 pranithk ndevos: this is the same problem roman.r reported on gluster-users, we asked him to turn off ensure-durability which turns off fsyncing.
10:01 pranithk ndevos: fsyncing is necessary to prevent against filesystem crashes/ power failures of the brick. But it is proving to be too costly for VMs
10:01 bernardo ndevos : these are 3 proxmox (kvm) servers, and glusterfs servers on the same nodes, so yes they are accessing the contents of the volume too (through fuse)
10:01 ndevos pranithk: right, where gets ensure-durability disabled? is that a qemu option?
10:02 pranithk ndevos: gluster. fsyncs are done by afr
10:02 ndevos pranithk: afr injects fsync?
10:02 pranithk ndevos: yes
10:02 hagarth bernardo: it would be interesting to analyze glusterfs logs from the time of fsck failure if you still have them
10:02 ndevos pranithk: why?
10:02 pranithk bernardo: Roman hasn't reported any ping-timeout expiry/hang problems after turning it off
10:03 ndevos pranithk: should the application not instruct when to fsync?
10:03 pranithk ndevos: consider this case: you write data, the filesystem says it is done, but then a power failure happens; although the write system call succeeded, the data won't be present on the disk. Fsync prevents these kinds of scenarios
10:03 ndevos so yes, disabling ensure-durability in afr makes sense to me, qemu/kvm should call fsync when it thinks it needs to do that
10:04 pranithk ndevos: even though afr thinks data is written the replicas will be out of sync
10:04 bernardo pranithk: nice, i will definitely try it, thank you
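A hedged sketch of the change bernardo is about to try; the option name cluster.ensure-durability matches the behaviour described above, but check it against the GlusterFS version in use before applying it:

    # stop AFR from issuing its own fsyncs; the trade-off is that replicas can end
    # up out of sync if a brick loses power before the data reaches disk
    gluster volume set <volname> cluster.ensure-durability off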
10:04 anoopcs joined #gluster
10:04 ndevos pranithk: sure, but that is the whole point of having a split write() and fsync() syscall
10:04 pranithk ndevos: didn't understand your question
10:05 bernardo ndevos: yes, i applied all the recomended tuning parametere
10:05 ndevos pranithk: ah, I think I understand why afr needs it, you'll mark the journal as updated, but the data may not have been synced
10:05 bernardo ndevos: parameters* sorry
10:05 pranithk ndevos: bang on target! :-)
10:06 pranithk ndevos: ok, I need to get back to work now. cya
10:07 * pranithk reading ec code. I need to understand it in and out :-)
10:12 SOLDIERz_ joined #gluster
10:15 deniszh joined #gluster
10:18 msmith_ joined #gluster
10:19 tryggvil joined #gluster
10:21 hagarth bernardo: if you get a chance to collect wireshark dumps from the time of ping timeout, it would be helpful to understand the latency that PING requests in gluster are experiencing
10:22 pranithk hagarth: we just found them to be from fsync. He will disable fsync
10:24 tryggvil joined #gluster
10:24 hagarth pranithk: ok, a fsync less world is almost performance nirvana :)
10:28 pranithk hagarth: hehe :-)
10:28 pranithk hagarth: there are some improvements we can do. fsync shouldn't undergo the normal timeout that we have
10:29 pranithk hagarth: But there is no way to convey that to rpc library. We either have timeout or no timeout, which is a problem :-(
10:29 pranithk hagarth: I will need to think about it a bit.
10:29 rjoseph joined #gluster
10:40 itisravi_ joined #gluster
10:42 hagarth pranithk: I don't think it is the timeout associated with fsync alone
10:43 hagarth pranithk: we might send a ping for a non-fsync fop after a fsync fop and still be affected by the latency that fsync induces on the server.
10:50 diegows joined #gluster
10:53 pranithk hagarth: damn you are right.
10:58 Norky joined #gluster
11:01 ppai joined #gluster
11:02 shubhendu joined #gluster
11:21 aravindavk joined #gluster
11:22 MrAbaddon joined #gluster
11:22 bernardo Okay, ensure-durability disabled. I will need to check what happens in case of a power outage, though. :)
11:22 bernardo Thank you guys for the help.
11:37 calisto joined #gluster
11:38 kkeithley1 joined #gluster
11:46 SOLDIERz_ joined #gluster
11:50 soumya_ joined #gluster
12:07 msmith_ joined #gluster
12:08 anil joined #gluster
12:10 prasanth_ joined #gluster
12:13 mojibake joined #gluster
12:18 sputnik13 joined #gluster
12:23 ekuric joined #gluster
12:29 edward1 joined #gluster
12:35 ildefonso joined #gluster
12:42 lalatenduM joined #gluster
12:46 getup joined #gluster
12:46 shubhendu joined #gluster
12:48 getup hi, we're rsyncing about 350k files to gluster, and although i don't mind the initial sync taking a while to complete, the rsyncs that run afterwards are terribly slow as well and we'd like to speed things up a bit if we can
12:49 getup there are some settings on the documenting the undocumented page, but i'm not sure what the defaults are
12:49 Arrfab getup: a lot of files .. small files or big files ?
12:50 getup small, and i know that'll always be relatively slow, but the initial sync im not too worried about, it's the ones that run later
12:52 tdasilva joined #gluster
12:52 getup oh important to mention, we're using nfs, as we can't use the gluster client on the server that sends the data
12:54 getup when i look at the code from glusterd-volume-set.c it seems that performance.nfs.quick-read and performance.nfs.stat-prefetch are off by default, so maybe there's some performance to gain there
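If those two options really are off by default, enabling them per volume looks roughly like this; the volume name is a placeholder and there is no guarantee they help a small-file rsync workload:

    gluster volume set <volname> performance.nfs.quick-read on
    gluster volume set <volname> performance.nfs.stat-prefetch on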
13:00 haomaiwang joined #gluster
13:01 Fen1 joined #gluster
13:03 Slashman joined #gluster
13:03 rtalur_ joined #gluster
13:04 lpabon joined #gluster
13:13 anoopcs joined #gluster
13:22 glusterbot News from newglusterbugs: [Bug 1166140] [Tracker] RDMA support in glusterfs <https://bugzilla.redhat.com/show_bug.cgi?id=1166140>
13:34 calum_ joined #gluster
13:36 uebera|| joined #gluster
13:36 uebera|| joined #gluster
13:49 [Enrico] joined #gluster
13:56 msmith_ joined #gluster
13:57 chirino joined #gluster
13:57 ppai joined #gluster
13:58 hagarth joined #gluster
13:59 nbalachandran joined #gluster
14:00 theron joined #gluster
14:03 julim joined #gluster
14:04 bennyturns joined #gluster
14:08 SOLDIERz_ joined #gluster
14:09 wushudoin joined #gluster
14:10 LebedevRI joined #gluster
14:13 bene2 joined #gluster
14:13 getup is there a glusterfs 3.4 client for debian squeeze available? i can't seem to find it.
14:16 diegows joined #gluster
14:16 Anuradha joined #gluster
14:22 snowboarder04 hi all - I set up a test 10G volume with two bricks on two nodes in replication (over VPN) and did an rsync of some 31865 files and eventually rsync just ground to a halt. For the past 24hrs+ all I've been seeing in the glustershd.log are "performing metadata selfheal on..." messages
14:22 snowboarder04 I just grepped and see 715283 lines with "metadata selfheal"
14:22 snowboarder04 o_0
14:23 snowboarder04 is that... normal?
14:27 zerick joined #gluster
14:29 plarsen joined #gluster
14:33 plarsen joined #gluster
14:34 virusuy joined #gluster
14:34 virusuy joined #gluster
14:36 _Bryan_ joined #gluster
14:36 fsimonce joined #gluster
14:40 kshlm joined #gluster
14:40 soumya joined #gluster
14:51 tdasilva joined #gluster
14:55 skippy did you do `rsync --inplace` ?
14:56 skippy because if not, then I'm not surprised that things went south.
14:56 snowboarder04 skippy: nope - should I?
14:56 snowboarder04 ahh
14:56 rafi1 joined #gluster
14:57 snowboarder04 I'll tear it down and build it back up again then :)
14:57 skippy rsync, by default, writes to a temp file then renames that to the target file.
14:57 skippy http://blog.vorona.ca/the-way-gluster-fs-handles-renaming-the-file.html
14:58 skippy http://joejulian.name/blog/dht-misses-are-expensive/
14:58 SOLDIERz_ joined #gluster
14:58 msmith_ joined #gluster
14:59 skippy if you use --inplace for the rsync, it will avoid the temp file, and instead create the expected target file directly.  the DHT computations then work as expected.
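A sketch of the rsync invocation skippy describes, with the source directory and the Gluster mount point as placeholders:

    # --inplace writes straight to the final filename, so the name the clients
    # hash for DHT lookups matches the brick the file actually lands on
    rsync -av --inplace /source/dir/ /mnt/glustervol/dir/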
14:59 SOLDIERz__ joined #gluster
15:07 ricky-ticky joined #gluster
15:40 bfoster joined #gluster
15:42 jackdpeterson joined #gluster
15:43 jackdpeterson Hey all, Have a question concerning LVM usage -- Currently have some Gluster servers on AWS. I'm allocating storage via EBS volumes in 500G increments. Is there any reason to add in LVM at this point? Structure = Distribute replicate with replica 2
15:44 jackdpeterson Is the added complexity worth it, if so ... what benefit does it confer?
15:45 rjoseph joined #gluster
15:52 [Enrico] jackdpeterson: one big advantage, at least with gluster 3.6 is you can do gluster snapshots, but only when using LVM thin volumes
15:55 jackdpeterson @Enrico, forgive me, but I'm hesitant to do any kind of thin provisioning; that comes from dealing with enterprisey storage... it always was a no-no given the rare scenario that one could over-provision storage. Is that an issue with LVM in this case, or does the term thin provisioning mean something slightly different?
15:57 [Enrico] jackdpeterson: an LVM thin volume is an evolution of standard LVM, it is not just about doing thin provisioning
15:57 [Enrico] the way you do snapshots is different
15:57 [Enrico] and that's a feature gluster needs for doing space efficient snapshots
15:58 [Enrico] so doing it with standard LVM would make little if any sense with gluster imho
15:58 [Enrico] with thin LVM volumes you don't have to reserve space for the snapshot, and you don't give the snapshot a size. It can grow as much as the original volume in the worst case, when all data change
15:59 bennyturns joined #gluster
15:59 [Enrico] so you need thin provisioning. On the other side, if you remove a file from a FS supporting discard/trim on an LVM thin volume, LVM will free the used space
16:00 [Enrico] whereas standard LVM would not know the space is actually free and will still keep a second copy of the data in case of snapshots
16:00 bennyturns joined #gluster
16:00 [Enrico] indeed you must be careful with thin provisioning, but the virtual size can be smaller than the physical size, giving you back standard provisioning
16:01 [Enrico] you can always grow it later
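A minimal sketch of carving a brick out of a thin pool along the lines described above, assuming a 500G EBS volume at /dev/xvdf; the sizes are illustrative and the exact layout should follow the snapshot documentation for the GlusterFS version in use:

    pvcreate /dev/xvdf
    vgcreate vg_bricks /dev/xvdf
    # thin pool sized below the PV to leave room for pool metadata
    lvcreate -L 450G -T vg_bricks/thinpool
    # thin volume; keep the virtual size within the pool for conventional provisioning
    lvcreate -V 400G -T vg_bricks/thinpool -n brick1
    mkfs.xfs -i size=512 /dev/vg_bricks/brick1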
16:01 jackdpeterson @Enrico, thanks. that makes sense. Now I gotta weigh switching from Ubuntu Gluster servers to, say, CentOS/RHEL. First go at GlusterFS was on ubuntu 12.04 w/ PPAs but I'm not seeing updates there to 3.6, and 12.04 is getting old in terms of FUSE as well
16:02 [Enrico] jackdpeterson: 3.6 is still not released to my knowledge
16:02 [Enrico] but is due soon
16:02 corretico joined #gluster
16:03 [Enrico] jackdpeterson: also in case you use libvirt be aware thin volumes might not play well with it, so do a test in a non production environment first
16:03 [Enrico] older version of libvirt will stop working entirely when a thin volume is present on the system. Current version should ignore them
16:04 jackdpeterson Ah, good to know :-)
16:04 [Enrico] to not have a bad surprise :)
16:05 jackdpeterson I've experienced one of those (NFS mounts start erroring with stale NFS file handle + requiring a reboot when adding a brick).
16:06 jackdpeterson ~50% of the time. all good w/ Fuse though
16:08 [Enrico] jackdpeterson: also about redhat/centos: be aware there is a glusterfs-server package provided with the distribution, since redhat sells glusterfs with the redhat storage addon now. You can indeed use the repo provided on the community glusterfs website with both rhel and centos. I would recommend rhel 7 since xfsdump (required by glusterfs) is free in rhel 7, but not included for free in rhel 6
16:09 [Enrico] in centos it is free in both cases
16:09 [Enrico] there is no* glusterfs-server provided, sorry
16:09 hchiramm joined #gluster
16:10 jackdpeterson @Enrico, so dependency xfsdump -- included in RHEL7/CentOS7 ... GlusterFS-Server --- requires community release for CentOS. for RHEL7 ... it's included assuming purchased licensing w/ satellite connectivity?
16:12 [Enrico] you should buy RHS if you want gluster from red hat. glusterfs community repo is available for both centos and redhat if you don't want to pay the extra
16:12 [Enrico] I have no experience with the satellite, so no idea if this makes it different in any way
16:12 [Enrico] also I was wrong apparently, glusterfs 3.6.0 is out
16:14 [Enrico] yep, released 2 weeks ago
16:14 semiosis 3.6.1 released this week
16:14 [Enrico] oh wow!
16:14 jackdpeterson @semiosis -- any updates on ubuntu ppa availability?
16:15 semiosis jackdpeterson: i'll do precise today
16:15 semiosis also utopic
16:15 jackdpeterson @semiosis -- Cheers!
16:15 semiosis but seriously, consider upgrading to trusty
16:15 semiosis many improvements
16:16 jackdpeterson @semiosis -- I can do that today since  I'm planning on redoing the storage from end-to-end anyways
16:23 jaroug joined #gluster
16:23 coredump joined #gluster
16:24 bernardo i already asked, but i have a strange thing with open file descriptors on some of my gluster bricks..
16:24 bernardo on one of them, i can see there are 26 threads in /proc/<pid>/task/, but 2 threads have more than 800000 open fds
16:25 bernardo is there a leak somewhere ? should i worry about it or not ? :)
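For anyone wanting to reproduce that measurement, a rough sketch; the brick PID is a placeholder, and note that threads of one process normally share a descriptor table, so per-task counts are usually identical:

    # descriptors held by the brick process as a whole
    ls /proc/<brick-pid>/fd | wc -l
    # per-task view, as in the report above
    for t in /proc/<brick-pid>/task/*; do printf '%s %s\n' "$t" "$(ls "$t"/fd | wc -l)"; done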
16:33 semiosis bernardo: could be your application leaving them open?
16:39 jackdpeterson hmm, not seeing documentation for CentOS7. Looks like the version available in base repo is 3.4.0.59. Is there a community repo available?
16:40 jackdpeterson J/k
16:41 bernardo semiosis: i only have vms accessing virtual disks, so kvm bug ? more details here http://fpaste.org/152504/14165016/raw/
16:41 semiosis jackdpeterson: ,,(yum repo)
16:41 glusterbot jackdpeterson: The official community glusterfs packages for RHEL (including CentOS, SL, etc) are available at http://download.gluster.org/pub/gluster/glusterfs/. The official community glusterfs packages for Fedora 19 and later are in the Fedora yum updates (or updates-testing) repository.
16:42 semiosis bernardo: idk much about running VMs on glusterfs
16:43 bernardo semiosis: ok
16:47 RameshN joined #gluster
16:51 theron joined #gluster
17:11 mojibake joined #gluster
17:15 hagarth bernardo: that looks like a bug to me. Will translate that to a bugzilla entry and send a possible patch on release-3.5.
17:16 hagarth sharknardo^^
17:17 sharknardo hagarth: ok :)
17:20 elyograg glusterbot, please give me the url to file a bug
17:20 elyograg hmm.  not workin'.
17:20 hagarth @bug
17:20 glusterbot hagarth: (bug <bug_id> [<bug_ids>]) -- Reports the details of the bugs with the listed ids to this channel. Accepts bug aliases as well as numeric ids. Your list can be separated by spaces, commas, and the word "and" if you want.
17:21 hagarth @fileabug
17:21 glusterbot hagarth: Please file a bug at http://goo.gl/UUuCq
17:21 hagarth elyograg: there you go
17:21 mojibake joined #gluster
17:23 glusterbot News from newglusterbugs: [Bug 1166275] Directory fd leaks in index translator <https://bugzilla.redhat.com/show_bug.cgi?id=1166275>
17:23 lmickh joined #gluster
17:26 elyograg thanks.  I did figure it out on the bugzilla page.
17:27 jackdpeterson Re: thin-provisioning ... what are some sane defaults assuming 500G Physical volumes to allow for snapshotting?
17:28 JoeJulian Depends on usage.
17:29 JoeJulian And predicted usage
17:30 smohan joined #gluster
17:30 JoeJulian This is testing to see what broke with message parser. file a bug . please ignore.
17:30 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
17:31 JoeJulian odd
17:31 elyograg i must have arrived on the naughty list. :)
17:31 ndevos glusterbot: you dont like it if we ask you to give the URL to file a bug ?
17:32 JoeJulian I upgraded yesterday. Something must have changed.
17:32 ndevos actually, I need to file a bug now...
17:32 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
17:32 ndevos see, it doesnt like being talked to
17:33 JoeJulian Ah, I see.
17:33 JoeJulian That makes sense.
17:34 JoeJulian If you address glusterbot, or start with @, it takes it as a command.
17:34 JoeJulian And I (long ago) turned off the spam from issuing invalid commands. I don't remember why...
17:35 ndevos aha, and there is a difference between listening in and getting addressed
17:52 elico joined #gluster
17:53 glusterbot News from newglusterbugs: [Bug 1166278] backport fix for bug 1010241 to 3.4 <https://bugzilla.redhat.com/show_bug.cgi?id=1166278>
17:53 glusterbot News from newglusterbugs: [Bug 1166284] Directory fd leaks in index translator <https://bugzilla.redhat.com/show_bug.cgi?id=1166284>
17:58 elyograg there's my bug.
18:03 smohan joined #gluster
18:15 tryggvil joined #gluster
18:18 tryggvil joined #gluster
18:22 maveric_amitc_ joined #gluster
18:25 sputnik13 joined #gluster
18:25 glusterbot News from resolvedglusterbugs: [Bug 1010241] nfs: crash with nfs process <https://bugzilla.redhat.com/show_bug.cgi?id=1010241>
18:32 tryggvil joined #gluster
18:47 chirino joined #gluster
18:58 cultav1x joined #gluster
19:01 plarsen joined #gluster
19:03 maveric_amitc_ joined #gluster
19:04 mojibake1 joined #gluster
19:04 getup joined #gluster
19:08 B21956 joined #gluster
19:13 ricky-ticky1 joined #gluster
19:18 davemc We will be discussing approaches to providing BitRot detection in future GlusterFS releases. Please join us to discuss your ideas and learn more about GlusterFS futures. http://bit.ly/1uXNIIL for details
19:19 davemc For more background, please visit http://www.gluster.org/community/documentation/index.php/Features/BitRot
19:37 davemc Hi folks,
19:37 rafi1 joined #gluster
19:38 davemc there's a posting on the G+ gluster page asking for help. I'm redirecting them to the email and/or IRC, but if someone has time/energy to pop over and see if they can help out, would be good.
19:42 davemc post from Piet Schee: https://plus.google.com/communities/110022816028412595292?cfem=1
19:48 JoeJulian davemc: I've posted several times that the G+ page is not for support. Please reiterate that position.
19:54 davemc JoeJulian, will do.  he said he'll email tomorrow
19:57 jobewan joined #gluster
20:07 semiosis people keep on emailing for support even though I say on the PPAs to not do that.  rarely one will even have the nerve to get mad at me for saying i wont help over email but instead on IRC
20:07 semiosis actually i dont think it's happened since i switched to gluster PPAs :)
20:10 deniszh joined #gluster
20:17 jmarley joined #gluster
20:17 davemc JoeJulian, would you be willing to do a short Gluster Hangout on your role on the Board and where you'd like to see Gluster go?
20:19 davemc semiosis, you're doing the university stuff correct?
20:20 semiosis davemc: yes i have two students working on my ,,(java) project.  we'll be merging their changes in the next few weeks (semester is almost over) so expect an announcement next month sometime
20:20 glusterbot davemc: https://github.com/semiosis/glusterfs-java-filesystem
20:20 semiosis also need to step up the urgency on their swag, johnmark!
20:20 semiosis time's running out
20:21 davemc think you and/or the students would like to give us a video on the project?
20:21 semiosis videos are part of their course requirement.  i'll get an update on that when we meet, most likely on Monday
20:22 semiosis maybe we can satisfy part of their video req with a hangout :)
20:22 davemc great. We'll loop back after that then
20:23 semiosis davemc++
20:23 glusterbot semiosis: davemc's karma is now 1
20:23 semiosis oh that's just silly
20:23 semiosis davemc++
20:23 glusterbot semiosis: davemc's karma is now 2
20:23 semiosis davemc++
20:23 glusterbot semiosis: davemc's karma is now 3
20:23 semiosis davemc++
20:23 glusterbot semiosis: davemc's karma is now 4
20:23 davemc lol
20:23 davemc it's only due to all the negative karma I create yelling at the website
20:24 semiosis hah
20:25 rotbeard joined #gluster
20:27 MrAbaddon joined #gluster
20:37 JoeJulian davemc: I thought so initially, but I'm losing confidence in the board.
20:40 badone joined #gluster
20:41 davemc JoeJulian, might be worth a chat about that
20:41 JoeJulian probably
20:57 badone joined #gluster
21:02 MugginsM joined #gluster
21:02 n-st joined #gluster
21:10 ghenry_ joined #gluster
21:19 ghenry joined #gluster
21:19 ghenry joined #gluster
21:32 cultav1x joined #gluster
21:36 msmith_ joined #gluster
21:38 smohan joined #gluster
21:38 msmith_ joined #gluster
21:39 msmith_ joined #gluster
21:41 gburiticato joined #gluster
21:44 plarsen joined #gluster
22:10 semiosis so i have a replicated volume but i want to double the number of replica sets.  plan is to make a new distribute only volume and copy the data into it, then clone the disks (via ebs snapshots) and enable replication.
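The final "enable replication" step in a plan like that is typically an add-brick with a higher replica count, pairing each existing brick with its clone; a sketch with placeholder brick paths, not a statement of how semiosis will actually do it:

    # convert a pure distribute volume (2 bricks) into 2x2 distribute-replicate
    gluster volume add-brick <volname> replica 2 server3:/bricks/b1 server4:/bricks/b2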
22:20 sputnik13 joined #gluster
22:33 tryggvil joined #gluster
22:46 B21956 left #gluster
22:52 sputnik13 joined #gluster
22:53 sputnik13 joined #gluster
23:33 gildub joined #gluster
