
IRC log for #gluster, 2018-01-10


All times shown according to UTC.

Time Nick Message
00:01 major joined #gluster
00:15 major joined #gluster
00:15 gospod4 joined #gluster
00:37 shellclear joined #gluster
00:38 shyam joined #gluster
00:44 MrAbaddon joined #gluster
01:17 shyam joined #gluster
01:20 gospod4 joined #gluster
01:34 daMaestro joined #gluster
01:34 MrAbaddon joined #gluster
01:46 major joined #gluster
02:00 gospod3 joined #gluster
02:07 atinm joined #gluster
02:10 gospod3 joined #gluster
02:11 prasanth joined #gluster
02:16 jri joined #gluster
02:25 hgowtham joined #gluster
02:26 gospod3 joined #gluster
02:34 Humble joined #gluster
02:34 hgowtham joined #gluster
02:59 nbalacha joined #gluster
03:01 ilbot3 joined #gluster
03:01 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:02 Vishnu_ joined #gluster
03:04 Vishnu__ joined #gluster
03:18 gyadav_ joined #gluster
03:18 durzo joined #gluster
03:20 durzo Hi all, I'm wanting to upgrade to 3.10.9, but the Ubuntu repo builds are all pending. Can anyone poke them so the packages start building?
03:22 kramdoss_ joined #gluster
03:30 cliluw joined #gluster
03:31 gospod3 joined #gluster
03:32 Jacob843 joined #gluster
03:40 protoporpoise durzo I'm not certain - but I'd say you'd need to ask the Ubuntu maintainers directly
04:02 durzo protoporpoise, what's the best way to contact them? they used to be built by Louis, who was in this chat room... the current packages haven't built since the 4th of Jan.. they seem to be stuck indefinitely
04:03 protoporpoise I'm unsure; we don't use Ubuntu for anything anymore, but I'd suggest logging a bug at https://bugs.launchpad.net/ubuntu/
04:03 glusterbot Title: Bugs : Ubuntu (at bugs.launchpad.net)
04:08 puiterwijk joined #gluster
04:13 kramdoss_ joined #gluster
04:14 itisravi joined #gluster
04:35 jiffin joined #gluster
04:37 gospod3 joined #gluster
04:37 Shu6h3ndu joined #gluster
04:39 sunny joined #gluster
04:46 msvbhat joined #gluster
05:08 Prasad joined #gluster
05:16 jri joined #gluster
05:17 varshar joined #gluster
05:23 skumar joined #gluster
05:24 apandey joined #gluster
05:28 ndarshan joined #gluster
05:30 hgowtham joined #gluster
05:30 rafi joined #gluster
05:33 karthik_us joined #gluster
05:34 prasanth joined #gluster
05:38 sanoj joined #gluster
05:42 gospod3 joined #gluster
05:45 kramdoss_ joined #gluster
05:54 msvbhat joined #gluster
05:56 Saravanakmr joined #gluster
06:02 kotreshhr joined #gluster
06:07 kdhananjay joined #gluster
06:15 kramdoss_ joined #gluster
06:26 xavih joined #gluster
06:28 sunnyk joined #gluster
06:36 Saravanakmr joined #gluster
06:40 voidm joined #gluster
06:48 gospod3 joined #gluster
07:08 psony joined #gluster
07:09 poornima_ joined #gluster
07:10 mbukatov joined #gluster
07:14 voidm joined #gluster
07:16 msvbhat joined #gluster
07:26 aravindavk joined #gluster
07:28 jtux joined #gluster
07:38 owlbot joined #gluster
07:40 jtux joined #gluster
07:48 sunkumar joined #gluster
07:50 owlbot joined #gluster
07:53 gospod3 joined #gluster
08:03 [diablo] joined #gluster
08:10 rafi joined #gluster
08:15 jri joined #gluster
08:19 jri joined #gluster
08:20 ivan_rossi joined #gluster
08:22 jiffin joined #gluster
08:23 msvbhat joined #gluster
08:39 ppai joined #gluster
08:45 omark joined #gluster
08:50 sanoj joined #gluster
08:51 psony|afk joined #gluster
08:55 fsimonce joined #gluster
08:58 susant joined #gluster
08:58 gospod3 joined #gluster
09:03 jiffin1 joined #gluster
09:05 atinm joined #gluster
09:16 Humble joined #gluster
09:24 jiffin joined #gluster
09:33 susant joined #gluster
09:40 arif-ali joined #gluster
10:04 gospod3 joined #gluster
10:23 ppai joined #gluster
10:24 ivan_rossi left #gluster
10:33 rafi1 joined #gluster
10:34 MrAbaddon joined #gluster
10:37 ivan_rossi joined #gluster
10:52 karthik_us|mtg joined #gluster
10:53 msvbhat joined #gluster
11:03 ppai joined #gluster
11:09 gospod3 joined #gluster
11:10 atinm joined #gluster
11:14 skumar_ joined #gluster
11:19 skumar__ joined #gluster
11:19 itisravi joined #gluster
11:24 rafi1 joined #gluster
11:26 shellclear_ joined #gluster
11:28 Vishnu_ joined #gluster
11:29 tontsa joined #gluster
11:31 buvanesh_kumar joined #gluster
11:35 Humble joined #gluster
11:35 karthik_us|mtg joined #gluster
11:38 hgowtham joined #gluster
11:46 shyam joined #gluster
11:54 msvbhat joined #gluster
11:56 ThHirsch joined #gluster
11:56 shellclear joined #gluster
12:07 jiffin1 joined #gluster
12:08 malevolent joined #gluster
12:09 drifterza joined #gluster
12:12 kettlewell joined #gluster
12:14 MrAbaddon joined #gluster
12:15 gospod3 joined #gluster
12:18 jiffin1 joined #gluster
12:20 jkroon joined #gluster
12:37 fury left #gluster
12:40 jkroon joined #gluster
12:55 ppai joined #gluster
13:13 kkeithley FWIW, there is a link on https://launchpad.net/~gluster for contacting the team members
13:13 glusterbot Title: Gluster in Launchpad (at launchpad.net)
13:13 kkeithley just sayin'
13:14 kkeithley And the Launchpad build farm is off-line "for maintenance"
13:15 kkeithley And Louis hasn't been building the Ubuntu packages for well over three years.
13:16 kkeithley also just sayin'
13:16 kkeithley Launchpad build farm status was mentioned over in #gluster-dev yesterday
13:17 kkeithley No ETA has been given for its return.  3.10.9 packages are in the queue to be built
13:19 kkeithley In case that's not obvious, that's 3.10.9 packages for Ubuntu releases.  3.10.9 packages for Fedora, CentOS, Debian, RHEL, and SuSE are available.
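
For anyone following along on Ubuntu: the packages under discussion come from the Gluster team's Launchpad PPAs (linked above). A minimal sketch of adding one; the release-specific PPA name below is an assumption, so check https://launchpad.net/~gluster for the one matching your target version:

    # Add the Gluster 3.10 PPA (name assumed) and install the server package
    sudo add-apt-repository ppa:gluster/glusterfs-3.10
    sudo apt-get update
    sudo apt-get install glusterfs-server
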
13:20 nigelb oh wow, that's a long time to apply KPTI patches
13:20 gospod3 joined #gluster
13:24 kkeithley In other news /me wonders when centos6-regression tests will return to normal.
13:32 jkroon joined #gluster
13:36 ppai joined #gluster
13:37 msvbhat joined #gluster
13:38 nbalacha joined #gluster
13:48 sanoj joined #gluster
13:54 aravindavk joined #gluster
13:58 skumar_ joined #gluster
14:00 drifterza joined #gluster
14:00 Asako_ Is there a way to see why gluster commands are timing out?  I can't even run gluster volume status without an error.
14:02 Asako_ 0-transport: EPOLLERR - disconnecting now
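
A few starting points for diagnosing CLI timeouts like Asako_'s; these are the stock glusterfs log locations, and the exact messages will vary:

    # Is the management daemon actually running?
    systemctl status glusterd

    # CLI-side errors (e.g. the EPOLLERR disconnect above) are logged here:
    tail -n 50 /var/log/glusterfs/cli.log

    # Server-side view of the same RPC failures:
    tail -n 100 /var/log/glusterfs/glusterd.log

    # Confirm the peers still see each other:
    gluster peer status
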
14:10 sanoj joined #gluster
14:16 Humble joined #gluster
14:20 Acinonyx joined #gluster
14:25 skylar1 joined #gluster
14:26 kramdoss_ joined #gluster
14:26 gospod3 joined #gluster
14:32 gospod3 I have this "bug", or situation, and it's been driving me crazy for months now
14:32 gospod3 2-node replica
14:32 gospod3 SSDs only on both sides
14:33 gospod3 after rebooting one of the nodes, one node reports in "gluster volume status" that only one node is available
14:33 gospod3 while at the same time the other sees both nodes and 4x Y
14:34 gospod3 so one node shows 2x Y while the other simultaneously shows 4x Y
14:35 gospod3 the huge problem: I'm running KVM, and the VMs act like they just lost their main HDD; nothing new can be opened (as if the main HDD were read-only)
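
No answer came in-channel, but the symptoms are typical of a two-node replica without quorum: each node can disagree about who is up, and the VMs lose write access. A commonly suggested mitigation (a sketch only, not verified against this setup; VOLNAME, host, and brick path are placeholders) is to add an arbiter brick and enable quorum:

    # Convert a 2-brick replica into replica 3 with an arbiter brick
    gluster volume add-brick VOLNAME replica 3 arbiter 1 node3:/bricks/arbiter

    # Enable client- and server-side quorum so a lone node stops accepting writes
    gluster volume set VOLNAME cluster.quorum-type auto
    gluster volume set VOLNAME cluster.server-quorum-type server
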
14:46 phlogistonjohn joined #gluster
14:52 guhcampos joined #gluster
14:58 psony|afk joined #gluster
15:06 jstrunk joined #gluster
15:06 shyam joined #gluster
15:10 jiffin joined #gluster
15:14 jbrooks joined #gluster
15:25 dominicpg joined #gluster
15:31 gospod3 joined #gluster
15:58 gyadav_ joined #gluster
16:18 msvbhat joined #gluster
16:19 vbellur joined #gluster
16:23 john51 joined #gluster
16:27 ThHirsch Hi all - I want to know if it is possible to put a brick on a ZFS volume (i.e. a block device) rather than on a ZFS dataset (filesystem).
16:27 ThHirsch My hope is to get better performance out of my glusterfs volumes (they are currently NOT performing well when used through ZFS datasets (RAIDZ1) as the underlying filesystem).
16:27 ThHirsch Has anyone done/tested the ZFS volume setup/usage?
16:27 ThHirsch Is it supported?
16:27 ThHirsch Will it increase performance?
16:27 ThHirsch And last but not least - how do I configure it (I don't find any docs/samples)...
16:27 kpease joined #gluster
16:27 snehring I've done this
16:28 snehring as far as being supported, I think it depends on who's doing the supporting
16:29 snehring ThHirsch: to clarify you want to create the bricks on top of zvol block devices?
16:37 skumar_ joined #gluster
16:37 gospod3 joined #gluster
16:53 ndevos ThHirsch: maybe http://docs.gluster.org/en/latest/Administrator%20Guide/Gluster%20On%20ZFS/ helps, and you may want to send improvements through the 'edit on github' link ;-)
16:53 glusterbot Title: Gluster On ZFS - Gluster Docs (at docs.gluster.org)
17:13 ThHirsch yes: I want to create the bricks on top of zvol block devices
17:15 ppai joined #gluster
17:16 snehring ThHirsch: It's pretty straightforward. Create the zvol however large you want it to be, then follow the normal gluster brick setup instructions, but instead of /dev/sdwhatever use /dev/zvol/whatever
17:17 ThHirsch snehring: Do you have any more details on 'how' exactly? cmds/options used to create the bricks?
17:17 ThHirsch ndevos: this link is known to me, but it's mostly about how to install ZFS and how to install Gluster (basically).
17:17 ThHirsch The interesting part, how to work with ZFS specifically, is covered in ONE sentence: "Continue with your GFS peer probe, volume creation, etc."
17:17 ThHirsch So no details to grab from there.... :-(
17:18 snehring http://docs.gluster.org/en/latest/Administrator%20Guide/formatting-and-mounting-bricks/ details brick setup
17:18 glusterbot Title: Formatting and Mounting Bricks - Gluster Docs (at docs.gluster.org)
17:18 ThHirsch snehring: "but instead of /dev/sdwhatever use /dev/zvol/whatever" - that sounds promising. will do tests now (no options needed to tell the brick creation process that this is a block device without a filesystem, rather than a filesystem?)
17:19 snehring you'll need to create the lvm thin volume and xfs filesystem on top of them (which that link details)
17:20 morse joined #gluster
17:21 snehring ThHirsch: the only reason I did this myself was to get native gluster snapshot support, I don't know if you'll see any performance improvement from this setup
17:22 snehring an added bonus is zfs sends of the bricks are a heck of a lot faster than georep
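
Putting snehring's outline into concrete commands, a sketch of a zvol-backed brick; the pool, names, and sizes are placeholders, and the thin-LVM step is only needed for gluster's native (LVM-based) snapshots:

    # 1. Create a 100G zvol; it shows up as a block device under /dev/zvol/
    zfs create -V 100G tank/gluster-brick1

    # 2. Thin-provisioned LVM on top of the zvol (required for gluster snapshots)
    pvcreate /dev/zvol/tank/gluster-brick1
    vgcreate vg_brick1 /dev/zvol/tank/gluster-brick1
    lvcreate -L 95G -T vg_brick1/pool
    lvcreate -V 90G -T vg_brick1/pool -n brick1

    # 3. XFS with 512-byte inodes (room for gluster's xattrs), then mount
    mkfs.xfs -i size=512 /dev/vg_brick1/brick1
    mkdir -p /data/brick1
    mount /dev/vg_brick1/brick1 /data/brick1
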
17:26 susant joined #gluster
17:30 ivan_rossi left #gluster
17:34 plarsen joined #gluster
17:35 WebertRLZ joined #gluster
17:39 zerick joined #gluster
17:42 zerick joined #gluster
17:42 gospod3 joined #gluster
17:46 jri joined #gluster
17:46 Vapez joined #gluster
17:46 zerick joined #gluster
17:51 jiffin1 joined #gluster
17:54 msvbhat joined #gluster
17:58 illwieckz joined #gluster
17:58 ThHirsch snehring: I do see the added value of zfs send. BUT if I have to create an LVM volume and an XFS filesystem on top of the ZFS block device (adding 2 layers of complexity), then how is this different from configuring bricks on top of a ZFS dataset (which already has a filesystem, but still adds an extra level)?
17:58 ThHirsch I might be wrong, but I got the impression that newer Gluster versions can deal directly with block devices.
17:58 ThHirsch I think I can do zfs send on a dataset as well ...
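
For reference, dataset-level zfs send works the same either way; a minimal sketch with placeholder pool and host names:

    # Snapshot the dataset backing a brick and stream it to another machine
    zfs snapshot tank/brick1@nightly
    zfs send tank/brick1@nightly | ssh backuphost zfs receive backup/brick1
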
18:00 s34n If I understand glusterfs correctly, a distributed volume should not have more than one brick on any given server
18:01 samppah ThHirsch: glusterfs can use zfs filesystem directly
18:02 snehring ThHirsch: as far as I understand it, glusterfs still relies on an underlying filesystem on the block devices (one that supports xattrs)
18:03 snehring as I said unless you want native snapshot support in gluster there's no real benefit to doing it with zvols instead of on a dataset
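
Since the xattr requirement keeps coming up, a quick way to confirm a candidate brick filesystem supports extended attributes (paths are placeholders; the trusted.* attributes require root):

    # Set and read back a user xattr on the brick mount
    touch /data/brick1/xattr-test
    setfattr -n user.test -v works /data/brick1/xattr-test
    getfattr -n user.test /data/brick1/xattr-test

    # On a live brick, gluster's own metadata shows up as trusted.* xattrs
    getfattr -d -m . -e hex /data/brick1/some-file
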
18:04 illwieckz joined #gluster
18:08 ThHirsch snehring: I hoped newer gluster versions were able to support ZFS snapshots in a more 'direct' way (rather than going the block device, LVM, XFS route). Some effort on this was started over a year ago (see e.g. https://www.spinics.net/lists/gluster-devel/msg21035.html ), but I was unable to find out whether it was merged into current gluster version(s) and/or whether there is still ongoing effort toward this aim.
18:08 glusterbot Title: Re: Question on merging zfs snapshot support into the mainline glusterfs — Gluster Development (at www.spinics.net)
18:09 samppah ThHirsch: I think you could use a glusterfs hook to do it
18:09 snehring Yeah I was excited about that, but I don't think it was finished
18:16 ThHirsch samppah: "..glusterfs hook to do it". Can you give some more detail or a pointer on where to start diving into this? (I will do my own Google search as well - thank you for the hint/entry point.)
18:20 samppah ThHirsch: it has been a long time since I worked with hooks. I'm also trying Google to find something to start with :)
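
For anyone pursuing the hook idea: glusterd runs executables placed under /var/lib/glusterd/hooks/1/<event>/{pre,post}/ and passes --volname=<VOL> among the arguments. A rough sketch of a post-start hook that snapshots a ZFS-backed brick; the script name, dataset layout, and argument handling are assumptions, not a tested recipe:

    #!/bin/bash
    # Hypothetical path: /var/lib/glusterd/hooks/1/start/post/S90zfs-snap.sh
    # Snapshot the brick's ZFS dataset every time the volume is started.
    for arg in "$@"; do
        case "$arg" in
            --volname=*) VOL="${arg#--volname=}" ;;
        esac
    done
    # Assumes the brick dataset is named after the volume (adjust to your layout)
    zfs snapshot "tank/${VOL}@started-$(date +%Y%m%d-%H%M%S)"
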
18:30 arif-ali joined #gluster
18:48 gospod3 joined #gluster
18:50 mallorn We recently upgraded from 3.10 to 3.13 and noticed that initial filesystem accesses (over fuse) are taking forever now.  Any ideas where to start looking?
18:50 ThHirsch hmm, might bd-xlator be an option? see e.g.: http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Developer-guide/bd-xlator/
18:50 ThHirsch the link deals (again) with LVM volumes, but has anyone out there done this with ZFS volumes?!
18:50 ThHirsch I would like to use the GlusterFS volumes (based on ZFS volumes/block devices) for storing KVM images
18:50 glusterbot Title: Block Device Translator - Gluster Docs (at staged-gluster-docs.readthedocs.io)
19:02 mk-fg joined #gluster
19:02 mk-fg joined #gluster
19:03 sunny joined #gluster
19:05 skylar1 joined #gluster
19:17 jobewan joined #gluster
19:18 pladd joined #gluster
19:30 MrAbaddon joined #gluster
19:43 WebertRLZ joined #gluster
19:51 rouven joined #gluster
19:51 zerick_ joined #gluster
19:52 major joined #gluster
19:53 gospod3 joined #gluster
20:08 Vapez joined #gluster
20:12 mallorn It appears to hang on the stat() calls when looking at a directory.
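
One standard way to narrow down which file operations are slow (not specific to mallorn's setup): gluster's built-in profiler. VOLNAME is a placeholder:

    # Collect per-brick FOP latency stats, reproduce the slow stat(), inspect, stop
    gluster volume profile VOLNAME start
    gluster volume profile VOLNAME info
    gluster volume profile VOLNAME stop
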
20:25 rastar joined #gluster
20:59 gospod3 joined #gluster
21:30 s34n For a server with multiple disks, is it better to put one brick on each disk, or to combine the disks using LVM etc. and then have only one brick?
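
Both layouts are valid; gluster happily takes several bricks per server. A sketch of the one-brick-per-disk variant with placeholder names (the usual trade-off: per-disk bricks confine a disk failure to one brick, while an LVM aggregate gives one big brick but a wider blast radius):

    # Each disk is formatted and mounted separately, then all four become bricks
    gluster volume create distvol \
        server1:/data/disk1/brick server1:/data/disk2/brick \
        server2:/data/disk1/brick server2:/data/disk2/brick
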
21:32 cmdpancakes joined #gluster
21:34 cmdpancakes hello gluster friends! I recently upgraded a gluster cluster from 3.7.20 to 3.12.4 and everything went more or less fine, but when I attempt a rebalance, directories start to scan normally, then all of a sudden a peer's brick process crashes
21:35 cmdpancakes the crash itself doesn't give me a whole lot of details to follow up on... where is the best place to follow up on crash dump info?
22:02 masber joined #gluster
22:04 gospod3 joined #gluster
22:04 protoporpoise left #gluster
22:27 vbellur cmdpancakes: a mail on gluster-users would be ideal. If you can provide more details about the crash from the brick log files, that would be very useful.
22:29 cmdpancakes hey vbellur, yep, I can provide all that; just wondering where my best audience would be
22:29 cmdpancakes should I file a bug as well, or just mail that group?
22:29 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
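
Before mailing gluster-users, the details vbellur asks for usually live in the brick log and the core dump; a sketch of where to look (stock log locations; the gdb step assumes debug symbols are installed):

    # Brick logs are named after the brick path
    ls /var/log/glusterfs/bricks/
    grep -B5 -A20 "signal received" /var/log/glusterfs/bricks/*.log

    # If a core file was written, a backtrace is what developers will ask for
    gdb /usr/sbin/glusterfsd /path/to/core -ex bt -ex quit
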
23:01 guhcampos joined #gluster
23:10 gospod3 joined #gluster
23:11 investigator_ joined #gluster
23:30 uebera|| joined #gluster
23:40 uebera|| joined #gluster
23:40 uebera|| joined #gluster
23:50 vbellur cmdpancakes: mailing there could be a good start. The concerned developers will open a bug or point to an existing one depending on the case.
23:55 plarsen joined #gluster
