IRC log for #gluster, 2015-07-17

All times shown according to UTC.

Time Nick Message
00:01 Guest41828 joined #gluster
00:01 vmallika joined #gluster
00:33 PatNarcisoZzZ joined #gluster
00:35 nangthang joined #gluster
00:45 calavera joined #gluster
00:51 topshare joined #gluster
01:00 plarsen joined #gluster
01:18 craigcabrey joined #gluster
01:23 craigcabrey joined #gluster
01:31 Lee1092 joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:52 craigcabrey joined #gluster
01:55 xoritor tessier, you should not need a new switch... just try it without the bond and see if you get any arp issues
01:56 xoritor well you MAY need a new switch i do not know
01:56 xoritor im headed to bed... i turn into a pumpkin in about 4 min
01:56 xoritor good luck
01:57 calavera joined #gluster
01:58 calisto joined #gluster
02:00 hchiramm_home joined #gluster
02:08 jbrooks joined #gluster
02:21 craigcabrey joined #gluster
02:25 calavera joined #gluster
02:35 anrao joined #gluster
02:56 Pupeno_ joined #gluster
03:13 bharata-rao joined #gluster
03:20 TheSeven joined #gluster
03:23 anrao joined #gluster
03:24 hagarth joined #gluster
03:32 ekuric joined #gluster
03:32 topshare joined #gluster
03:36 nangthang joined #gluster
03:37 vmallika joined #gluster
03:45 ppai joined #gluster
03:47 atinm joined #gluster
03:54 kanagaraj joined #gluster
03:58 itisravi joined #gluster
03:58 RameshN joined #gluster
03:59 calavera joined #gluster
04:05 anoopcs9 joined #gluster
04:05 anoopcs9 left #gluster
04:18 calisto joined #gluster
04:24 siel joined #gluster
04:24 yazhini joined #gluster
04:29 nbalacha joined #gluster
04:36 anrao joined #gluster
04:42 sakshi joined #gluster
04:42 haomai___ joined #gluster
04:51 gem joined #gluster
04:52 hgowtham joined #gluster
04:55 nangthang joined #gluster
05:03 kdhananjay joined #gluster
05:07 PatNarcisoZzZ joined #gluster
05:12 anrao joined #gluster
05:28 ilbot3 joined #gluster
05:28 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
05:29 jobewan joined #gluster
05:34 vmallika joined #gluster
05:36 aravindavk joined #gluster
05:37 Manikandan joined #gluster
05:38 spandit joined #gluster
05:39 ndarshan joined #gluster
05:40 arcolife joined #gluster
05:46 topshare joined #gluster
05:48 Bhaskarakiran joined #gluster
05:54 topshare joined #gluster
05:56 deepakcs joined #gluster
05:59 kdhananjay joined #gluster
06:00 smohan joined #gluster
06:11 Philambdo joined #gluster
06:16 jtux joined #gluster
06:19 raghu joined #gluster
06:22 maveric_amitc_ joined #gluster
06:23 schandra joined #gluster
06:26 karnan joined #gluster
06:30 sblanton_ joined #gluster
06:31 smohan joined #gluster
06:33 glusterbot News from newglusterbugs: [Bug 1244098] gluster peer probe on localhost should fail with appropriate return value <https://bugzilla.redhat.com/show_bug.cgi?id=1244098>
06:44 nangthang joined #gluster
06:45 jwd joined #gluster
07:04 [Enrico] joined #gluster
07:06 topshare joined #gluster
07:13 saurabh_ joined #gluster
07:22 Trefex joined #gluster
07:28 atalur joined #gluster
07:33 glusterbot News from newglusterbugs: [Bug 1244109] quota: brick crashes when create and remove performed in parallel <https://bugzilla.redhat.com/show_bug.cgi?id=1244109>
07:37 Jampy joined #gluster
07:38 Jampy Hello everybody.
07:39 Jampy I have a problem where Gluster (via NFS) does create named pipe files instead of unix sockets.
07:39 ndevos Jampy: what version are you on?
07:39 Jampy "/var/log" becomes a pipe and thus syslog won't work
07:40 Jampy ndevos, 3.5.2-1 (debian package)
07:40 ndevos its bug 1235231 that teknologeek reported a while back
07:40 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1235231 medium, medium, ---, ndevos, MODIFIED , unix domain sockets on Gluster/NFS are created as fifo/pipe
07:40 schandra poornimag: ping .
07:41 Jampy ndevos: I know (I commented to that bug), but it seems to me that there is no progress, or am I wrong?
07:41 ndevos Jampy: I can backport it to 3.7, 3.6 and 3.5, but for now you need to place the socket somewhere else :-/
07:42 ndevos Jampy: well, its  a bug against the mainline version and that got fixed, I didnt see your version request earlier
07:42 Jampy ndevos: unfortunately, that's not an option for me. my root fs is completely on gluster (OpenVZ container in Proxmox using Gluster for replicated shared storage)
07:42 fsimonce joined #gluster
07:43 ndevos Jampy: ah, I can see that it makes things more difficult... maybe you can place the socket in /dev/shm/... ?
07:44 ndevos Jampy: I can get you a patch for 3.5, but I have no idea how to build debian packages, you would need to do that yourself, or find someone that can help with that
07:45 Jampy ndevos: I already thought about doing some trick like that. the problem is: this affects a number of containers and I can't catch all sockets (for example, postfix uses quite a number of sockets itself)
07:45 Jampy ndevos: well, that backport would be defenitely a start :)
07:46 Jampy ndevos: I'll try to get in touch with the Debian people...
07:46 ndevos Jampy: oh, yes, I guess changing all sockets for all applications in all containers would be a little annoying :)
07:46 ndevos Jampy: I'm cloning the bug for the other versions, once that is done, you can file a debian bug and point to the gluster bug with the patch
07:47 Jampy ndevos: awesome!
07:50 Jampy ndevos: should I watch bugzilla to know when the patches are done?
07:50 ndevos Jampy: you could, bug 1244118 is for 3.5
07:50 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1244118 medium, medium, ---, ndevos, ASSIGNED , unix domain sockets on Gluster/NFS are created as fifo/pipe
07:53 Jampy ndevos: that's good news to me. I don't want to stress you, but any idea how long that could take? hours? days? weeks?
07:54 ndevos Jampy: I'll do 3.7, 3.6 and then 3.5, shouldnt take long, the patches will land within the hour if nothing else important pops up to distract me
07:54 Jampy ndevos: you made my day :)
07:55 ndevos Jampy: after that, the patches need some basic testing and review before they get merged, once merged, the next 3.x.y update will include the fix
07:57 Jampy ndevos: ah, how soon would be realistic for the release? just need to do some planning..
07:58 ndevos Jampy: 3.5.5 just was released ~2 weeks ago, if you really need a fix soon, we can do a 3.5.6 this weekend - I cant say much about the debian packages
08:02 ctria joined #gluster
08:02 Jampy ndevos: I'm actually a bit under pressure, yes. I'll contact the package maintainer so that he will be prepared. I hope that will be quick, but OTOH the package is currently at version 3.5.2 ... :-/
08:03 ndevos Jampy: there are packages on download.gluster.org that are built by the Gluster Community, those are more recent
08:03 glusterbot News from newglusterbugs: [Bug 1233025] GlusterFS 3.7.3 tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1233025>
08:04 ndevos Jampy: semiosis and the-me are the debian package maintainers, if I remember well, but both are more lurking here than active ;-)
08:06 Jampy ndevos: I just noticed Proxmox provides its own gluster packages in the Proxmox repository (where my currently installed package is from) - don't know why that is. Maintainer is Patrick Matthäi.
08:16 ndevos Jampy: thats the debian maintainer :)
08:18 Jampy ndevos: I see. I'm wondering why Proxmox does offer gluster packages in their own repo...
08:19 ndevos Jampy: I wouldnt know... Romeor is another Proxmox user, maybe he knows
08:20 Jampy Romeor: do you know? ;-)
08:24 kovshenin joined #gluster
08:25 Arrfab joined #gluster
08:25 nhayashi joined #gluster
08:29 Jampy I guess they are backporting, since the Debian Wheezy package is still on 3.2.7 (Proxmox is based on Debian Wheezy = oldstable)
08:31 autoditac joined #gluster
08:35 elico joined #gluster
08:42 morse joined #gluster
08:47 Jampy okay, have a nice day ndevos, and good luck with the patching/releasing..! :-)
08:54 Trefex1 joined #gluster
08:56 al joined #gluster
08:57 ndevos Jampy: patches are ready and tests are being run on them, for all versions down to 3.5
09:01 ajames-41678 joined #gluster
09:03 smohan joined #gluster
09:03 jiffin joined #gluster
09:17 atinm joined #gluster
09:18 VeggieMeat joined #gluster
09:19 cleong joined #gluster
09:26 Philambdo joined #gluster
09:33 rjoseph joined #gluster
09:35 VeggieMeat joined #gluster
09:35 the-me ndevos: Jampy: :-)
09:37 Trefex joined #gluster
09:40 sabansal_ joined #gluster
09:49 harish joined #gluster
09:50 atinm joined #gluster
09:51 nbalacha joined #gluster
09:53 perpetualrabbit How can I make a second head-node for failover?
09:57 maveric_amitc_ joined #gluster
09:58 Slashman joined #gluster
10:02 ira joined #gluster
10:07 deepakcs joined #gluster
10:11 DV joined #gluster
10:12 nbalacha joined #gluster
10:14 PatNarcisoZzZ joined #gluster
10:17 * tessier sighs
10:21 jwd joined #gluster
10:22 tessier I think my earlier problems were hardware. I moved over to another set of disk servers and the packet loss problems went away. But gluster is still slower than crap for me. :( If there's a way to make it host my VM images quickly I sure can't find it.
10:22 tessier So now, after weeks of messing with it, I have to go back to iscsi. :(
10:27 jwd joined #gluster
10:36 sabansal_ joined #gluster
10:37 sakshi joined #gluster
10:42 rwheeler joined #gluster
10:49 Trefex1 joined #gluster
10:51 itisravi joined #gluster
10:51 anrao joined #gluster
10:53 LebedevRI joined #gluster
10:54 malevolent joined #gluster
10:56 vmallika joined #gluster
11:07 Manikandan joined #gluster
11:14 autoditac joined #gluster
11:20 sadbox joined #gluster
11:22 Manikandan_ joined #gluster
11:25 vmallika joined #gluster
11:26 firemanxbr joined #gluster
11:26 elico joined #gluster
11:28 VeggieMeat joined #gluster
11:31 Manikandan joined #gluster
11:32 VeggieMeat joined #gluster
11:32 malevolent joined #gluster
11:32 xavih joined #gluster
11:34 glusterbot News from newglusterbugs: [Bug 1220173] SEEK_HOLE support (optimization) <https://bugzilla.redhat.com/show_bug.cgi?id=1220173>
11:40 VeggieMeat joined #gluster
11:59 Slashman joined #gluster
12:08 VeggieMeat joined #gluster
12:08 suliba joined #gluster
12:10 jtux joined #gluster
12:10 obnox joined #gluster
12:19 patrick__ joined #gluster
12:23 rjoseph joined #gluster
12:26 VeggieMeat joined #gluster
12:30 B21956 joined #gluster
12:31 xoritor joined #gluster
12:37 ajames41678 joined #gluster
12:44 VeggieMeat joined #gluster
12:45 unclemarc joined #gluster
12:52 julim joined #gluster
12:53 julim joined #gluster
12:59 patrick__ joined #gluster
13:03 xoritor joined #gluster
13:04 glusterbot News from newglusterbugs: [Bug 1244191] auth-cache should be listed in statedump <https://bugzilla.redhat.com/show_bug.cgi?id=1244191>
13:06 wkf joined #gluster
13:06 glusterbot News from resolvedglusterbugs: [Bug 1244098] gluster peer probe on localhost should fail with appropriate return value <https://bugzilla.redhat.com/show_bug.cgi?id=1244098>
13:06 bene joined #gluster
13:08 shyam joined #gluster
13:08 ekuric joined #gluster
13:12 dgandhi joined #gluster
13:21 shaunm_ joined #gluster
13:22 ndevos anything important happened in Gluster land this week? add it to https://public.pad.fsfe.org/p/gluster-weekly-news please!
13:24 mpietersen joined #gluster
13:24 DV joined #gluster
13:26 RameshN joined #gluster
13:26 VeggieMeat joined #gluster
13:27 plarsen joined #gluster
13:29 cyberswat joined #gluster
13:29 georgeh-LT2 joined #gluster
13:31 Sjors ndevos: I think it's great that the Gluster project is stepping up its community efforts :)
13:31 Sjors ndevos: thanks for that, to you and the other project members :)
13:32 ndevos Sjors: thanks, nice to hear that its appreciated!
13:32 * ndevos hopes there soon is a Gluster Community Manager hired
13:33 Sjors that would be great :)
13:33 nsoffer joined #gluster
13:34 hamiller joined #gluster
13:42 l0uis I currently have a gluster cluster made up of 10 bricks: 5 nodes, 2 bricks per node, 2 drives per brick in raid 0. I'm running distribute w/ replica count = 2. The individual drives themselves are 3TB, so each brick is 6TB (2 drives, raid 0). What I'm planning on doing is replacing the 2 drive brick with a 1 drive brick, 6 TB. So I will go from a 10 brick cluster to a 20 brick cluster.
13:44 l0uis So simplisticly, right now the brick1 is /dev/md1 let's say. When I do the conversion and it becomes /dev/sda1 (replace the 2 drive raid 0  with a single drive), what do I need to do w/ gluster to have it pick that up? Anything at all?
13:46 shyam l0uis: Are you replacing the existing brick with a new one, or adding a new brick? IOW, the cluster would remain at 10 bricks at the end of this conversion, or be a 20 brick cluster?
13:46 l0uis shyam: The first step is to replace the existing bricks with new ones to free up additional slots for new drives and bricks as a second step.
13:47 l0uis The systems have space for 4 drives. So by going from a brick being 2x3TB in raid 0 to 1x6TB I'll open up 2 additional slots per system.
13:48 jmarley joined #gluster
13:48 l0uis So I'm trying to map out how to do the transition of the existing bricks first. For example, should I clone the data onto the new 6TB drive before swapping it with the existing raid0 brick? Or should I pull the 2 existing 3TB drives in the raid0 array, put in the new 6TB drive, mount it up empty, and let gluster heal it w/ the replica.
13:49 shyam l0uis: ok, so replace 2 drives with a single drive -> replace the brick using the 2 drives to the single drive -> free up 2 slots -> add more drives -> add more bricks -> rebalance so that the cluster is balanced, would this be a summary of what you intend to do?
13:50 shyam l0uis: Hmmm... there are a few strategies around this, can you take a downtime?
13:50 l0uis shyam: thats a good summary. downtime is ok.
13:50 shyam l0uis: or can your cluster take a downtime, to clone/copy the Raid0 set onto the 6TB drive (and what time this takes). If this is the case, this is possibly the fastest route to replacing the brick
13:52 l0uis That is what I had in my initial plan. (a) take the node down. (b) pop out a drive in brick2. (c) put in new 6tb drive. (d) clone brick1 onto new 6tb drive w/ dd or whatnot. etc
13:52 shyam l0uis: Perform any healing of files before this, assuming there are files that need healing across a replica
13:52 shyam Also, what is the source for the clone?
13:52 l0uis so all gluster knows about the underlying brick drives is the mount point, right? as long as it clones bit for bit the old raid0 to the new single drive i should be able to just mount it up where it needs to be and gluster is happy?
13:52 shyam If you popped out the 2x3TB drives, where is the source?
13:53 l0uis each system has 4 drives, 2 bricks each. i'll pop out the drives for brick2 to make room for the 6tb drive to clone brick1
13:53 l0uis and vice versa. obviously during downtime.
13:53 shyam l0uis: yes, if the mountpoint remains the same on the same host, then a bit by bit copy would be fine
13:53 shyam l0uis: understood
13:53 l0uis ok excellent.
13:53 shyam l0uis: mind elaborating the plan on the users list, in case some other dev has a different idea or sees a gotcha
13:54 shyam I see this as fairly straight, as cloning the disk set would be the fastest to achieve the copy (to my knowledge)
13:54 l0uis Definitely, I'll send a note out.
13:54 shyam l0uis: thanks
13:55 l0uis I appreciate the help. I figure it'll take between 2-3 hrs to do the clone.
13:55 bene joined #gluster
13:55 shyam l0uis: Here is something else that may interest you: http://review.gluster.org/#/c/8503/3/doc/admin-guide/en-US/markdown/admin_replace_brick.md
13:57 l0uis Cool thanks. It looks like in that document he is moving the mount point?
13:57 l0uis I shouldn't have to do a remove brick etc in my scenario I don't think, since I'm just replacing the underlying hard drive and not the mount point, right?
13:58 ajames-41678 joined #gluster
13:58 kovshenin joined #gluster
14:00 shyam l0uis: yes, that is the replace brick pattern, bottom half of that document is a replace brick in a replica set. A little different than your use case, but mechanics are the same if you let Gluster do the healing/copying rather than a clone of the older brick, anyway jFYI :)
14:01 l0uis ahh gotcha.
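[editor's note] A minimal sketch of the clone approach discussed above, assuming the old raid0 brick is /dev/md1 mounted at /export/brick1 and the new 6TB drive gets a single partition /dev/sdc1 at least as large as the raid0 set; the mount point and destination device names are illustrative, and the node is taken down for the copy as agreed.
    service glusterd stop              # stop the management daemon on this node
    pkill glusterfsd                   # also stop the brick process(es) so the brick can be unmounted
    umount /export/brick1
    dd if=/dev/md1 of=/dev/sdc1 bs=4M conv=fsync   # bit-for-bit copy, XFS UUID and all
    mount /dev/sdc1 /export/brick1     # same mount point, so gluster sees the same brick
    service glusterd start             # self-heal catches up on writes made to the replica while this node was down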
14:05 julim_ joined #gluster
14:08 Trefex joined #gluster
14:09 calisto joined #gluster
14:11 shyam joined #gluster
14:17 bennyturns joined #gluster
14:18 Jampy ndevos: great! so let's hope the tests go well :)
14:18 ndevos Jampy: automated tests passed, so that looks good :)
14:19 Manikandan joined #gluster
14:19 Jampy ndevos: just noticed the "Proxmox installations of Debian 8 fail when the VM image is stored on a Gluster volume" notice in the "Gluster weekly news". Are you familiar with that problem? I'm not having any problems with Debian 8 VMs on Gluster/NFS... at least I did not notice anything..!?
14:21 l0uis shyam: As an alternate option I wonder if it's possible to go from replica 2 to replica 1. Replica 2 really isn't required, but when I built the cluster I went safe.
14:22 l0uis shyam: Is this possible? If so it will buy me the necessary time before I rebuild the thing again in 12 months or so.
14:22 shyam l0uis: I am not sure, how to achieve distribute-replicate to distribute only...
14:22 nbalacha joined #gluster
14:26 ndevos Jampy: thats a problem Romeor reported, here and on the mailinglist
14:26 l0uis shyam: k thanks
14:31 Jampy ndevos: apparently the weekly news is outdated, as it was just a network problem of that user ;) https://www.gluster.org/pipermail/gluster-users/2015-May/021919.html
14:32 topshare joined #gluster
14:32 Jampy ndevos: please ignore what I just said *g*
14:33 Slashman joined #gluster
14:33 topshare joined #gluster
14:35 pppp joined #gluster
14:36 ndevos Jampy: right, sorting IRC discussions isnt very straight forward :)
14:39 _dist joined #gluster
14:39 Jampy ndevos: okay, I have to go - have a nice day. looking forward to see a new release soon.... bye ;)
14:40 ndevos bye Jampy!
14:45 Bhaskarakiran joined #gluster
14:47 ninkotech_ joined #gluster
14:47 ninkotech__ joined #gluster
14:51 calavera joined #gluster
14:53 calavera joined #gluster
14:57 Telsin joined #gluster
15:01 calavera joined #gluster
15:04 Trefex joined #gluster
15:05 Telsin left #gluster
15:13 shyam joined #gluster
15:14 calisto joined #gluster
15:26 elico left #gluster
15:27 mpietersen joined #gluster
15:32 aaronott1 joined #gluster
15:37 ank joined #gluster
15:44 Saravana_ joined #gluster
15:44 jdossey joined #gluster
15:49 cholcombe joined #gluster
15:53 shyam joined #gluster
16:03 mpietersen joined #gluster
16:06 ueberall joined #gluster
16:07 RameshN joined #gluster
16:07 aaronott joined #gluster
16:11 kdhananjay joined #gluster
16:37 nbalacha joined #gluster
16:54 bennyturns joined #gluster
16:54 jobewan joined #gluster
17:05 glusterbot News from newglusterbugs: [Bug 1244290] Fix invalid logic in tier.t <https://bugzilla.redhat.com/show_bug.cgi?id=1244290>
17:08 ira joined #gluster
17:08 Ulrar left #gluster
17:13 Rapture joined #gluster
17:25 wkf joined #gluster
17:32 ank joined #gluster
17:32 Saravana_ joined #gluster
17:32 kampnerj joined #gluster
17:48 Romeor deem, Jampy left
17:48 Romeor just curious if hes running proxmox also
17:48 Romeor cuz i know that there are no problems with virt
17:48 Romeor ovirt*
17:55 Romeor have some one used virt-manager with glusterfs?
17:55 Romeor does it support libgfapi ?
17:56 Romeor I mean out-of-the box, so i don't need to configure something else
17:56 Romeor if i select disk image that located on gluster, it uses libgfapi automagically ?
18:09 JoeJulian Romeor: http://imgur.com/cY1NndS
18:10 haomaiwang joined #gluster
18:17 haomaiwa_ joined #gluster
18:21 togdon joined #gluster
18:21 hchiramm_home joined #gluster
18:23 redbeard joined #gluster
18:28 haomaiwang joined #gluster
18:42 haomaiwa_ joined #gluster
18:45 jmarley joined #gluster
18:46 dgbaley Romeor: you can add a storage pool, or skip it and add storage with gluster://<host>/<vol>/<path> directly.
18:59 haomaiwa_ joined #gluster
19:12 calavera joined #gluster
19:12 haomaiwa_ joined #gluster
19:17 kampnerj I have a storage pool configured and can see the images from virt-manager, but I get an error when trying to create a new virtual machine from the image.
19:17 kampnerj the storage pool is gluster
19:18 kampnerj Error starting domain: internal error: process exited while connecting to monitor: ....
19:20 kampnerj I can create a virtual machine and run the images from the local filesystem, and when I mount the gluster filesystem from the system.
19:20 kampnerj but not from the gluster filesystem storage pool
19:21 dgbaley kampnerj: take a look in /var/log/libvirt/qemu/<name>.log for more info. Figure out the path  to your qemu binary and use $(ldd <path> | grep gfapi) to see if your qemu supports it
19:23 harish joined #gluster
19:24 kampnerj like this?
19:25 JoeJulian Also make sure allow-insecure is set
19:25 kampnerj ldd /usr/libexec/qemu-kvm | grep gfapi
19:25 kampnerj libgfapi.so.0 => /lib64/libgfapi.so.0 (0x00007f6cc35a7000)
19:25 kampnerj allow-insecure is set on the gluster server, right?
19:26 Asmadeus joined #gluster
19:26 JoeJulian server, in /etc/glusterfs/glusterd.vol and for the volume ('gluster volume set help' for the exact setting).
19:27 JoeJulian @lucky gluster virtual allow-insecure
19:27 glusterbot JoeJulian: http://www.gluster.org/community/documentation/index.php/Libgfapi_with_qemu_libvirt
19:29 kampnerj yes, I had set that previously: option rpc-auth-allow-insecure on
19:29 l0uis shyam: Sent the note to the list. For everyone else, would appreciate any thoughts re: migrating bricks from a 2 drive raid 0 to a single HD detailed here: http://www.gluster.org/pipermail/gluster-users/2015-July/022865.html
19:29 l0uis shyam: thanks again for your earlier help
19:32 kampnerj allow-insecure is on
19:32 haomaiw__ joined #gluster
19:33 kampnerj same problem
19:36 togdon joined #gluster
19:37 xoritor ok so if i have 5 servers and each server has 1 brick (5 bricks) and i want to do distrepl i can only use 4 and 1 will be a spare correct?
19:38 xoritor unless i add one more drive ie... (add in pairs)
19:40 xoritor and whenever i do add it has to be a multiple of the replica count (in this case 2)
19:44 JoeJulian xoritor: I've done odd numbers of servers with even numbers of drives. Or even even numbers of partitions.
19:44 JoeJulian But there is no "hot spare".
19:44 xoritor yea i didnt mean a "hot" spare
19:44 xoritor i meant one drive not used that i can add if a drive dies
19:44 JoeJulian But idle hardware is wasted hardware, imho.
19:44 anrao joined #gluster
19:44 xoritor yes
19:44 xoritor true
19:44 xoritor any way for me to use it?
19:45 xoritor other than buy a new drive?
19:45 JoeJulian With 1 drive each, split the drives into 2 partitions.
19:45 JoeJulian Pair them up in an offset ring.
19:45 xoritor hmm
19:45 xoritor whats the performance hit on that?
19:45 JoeJulian None
19:45 Romeor dgbaley, so if i create a storage pool, virt-manager will use libgfapi automatically?
19:46 xoritor really?
19:46 JoeJulian The only limitation would be the max-file-size of the reduced partition size.
19:46 xoritor 500G vs 1TB
19:46 JoeJulian right
19:47 xoritor that COULD be an issue
19:47 * Romeor is really thingking of moving to pure debian with qemu-kvm and virt-manager
19:47 * JoeJulian runs openstack at home.
19:47 Romeor openstack is overkill for my tasks
19:47 dgbaley Romeor: either way, libvirt is passing gluster:// to qemu. The storage pool only helps you point and click to choose your disk. It also looks up the disk size in some dialogs.
19:47 JoeJulian Did you see the "at home"
19:47 dgbaley JoeJulian: yeah, why not just use libvirt?
19:48 xoritor i despise openstack
19:48 dgbaley Romeor: did you say you were using proxmox?
19:48 JoeJulian Definitely overkill for home. :)
19:48 Romeor dgandhi, yes
19:48 Romeor JoeJulian, its overkill for home too :)
19:48 Romeor head node, slave node - who needs that
19:49 JoeJulian dgbaley: I use openstack all day at work, so I'm comfortable with the significant learning curve necessary for setting it up. I run trunk at home so I know what changes are coming up at work.
19:49 xoritor i am about to setup ovirt with the self hosted engine so i dont have to deal with openstack
19:49 dgbaley I run two OpenStack clusters and I run the controller as a VM too, but through libvirt directly. So even with 1 physical server, you still get the separate controller
19:49 JoeJulian Plus, it's nice to have the html gui so I can manage stuff when I'm away from home from my phone.
19:50 * Romeor runs W7 with its virtualization at home. pretty happy.
19:50 * xoritor takes the ice pick out of his eye
19:50 JoeJulian gluaterbot: kban Romeor for mentioning W7.
19:50 dgbaley JoeJulian: Ah, running from master is nice. But I want my stuff at home to just work ^TM, so I use libvirt/virt-manager on a small supermicro itx box.
19:51 Romeor JoeJulian, you already took my +v from me, enough!
19:51 JoeJulian lol
19:51 haomaiwang joined #gluster
19:52 * Romeor is playing teso
19:52 dgbaley Romeor: can you run proxmox directly, not as an appliance through their install CD?
19:53 dgbaley I liked Proxmox's stability, but if I did have problems with something like gluster, I felt like I was completely hopeless and just moved on.
19:53 xoritor JoeJulian, by offset ring you just mean server1/brick1 server2/brick1 server3/brick1 server4/brick1 server5/brick1 server1/brick2 server2/brick2 server3/brick2 server4/brick2 server5/brick2
19:53 Romeor you can run debian linux using proxmox repos, but you will end up with their GUI and kernel anyway. so what's the difference?
19:53 xoritor right?
19:54 l0uis xoritor: that's precisely what we do in a 5 server 10 brick setup w/ replica 2
19:54 dgbaley Are you using virt-manager with proxmox, or trying to see if you can commit to replacing it?
19:55 dgbaley Romeor: ^
19:55 xoritor l0uis, thanks
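[editor's note] A minimal sketch of the "offset ring" layout JoeJulian and l0uis describe, assuming each of the five servers splits its single drive into two brick directories; the volume, host and path names are illustrative. With replica 2, consecutive bricks in the create command form the replica pairs, and the offset keeps both copies of any pair off the same server.
    gluster volume create myvol replica 2 \
        server1:/bricks/b1 server2:/bricks/b1 \
        server3:/bricks/b1 server4:/bricks/b1 \
        server5:/bricks/b1 server1:/bricks/b2 \
        server2:/bricks/b2 server3:/bricks/b2 \
        server4:/bricks/b2 server5:/bricks/b2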
19:55 haomaiw__ joined #gluster
19:55 Romeor virt-manager is not compatible with proxmox afaik. so i'm just thinking about moving to pure debian
19:56 Romeor both virt-manager and virhs
19:56 xoritor have you tried ovirt?
19:56 xoritor just curious?
19:57 dgbaley Romeor: I felt like proxmox was fading away, so if you stick with it I think sooner or later you'll regret it
19:58 dgbaley If it helps I've had good luck with libvirt+kvm+qfapi on CentOS 7.
19:59 xoritor what kind of luck would i have if i added one of the servers later
20:00 Romeor why you felt this way? proxmox will live pretty long way. they got vision and goals and there is nothing better for virtualizing enterprise infrastructure. ovirt is overkill, i'm not even talking about openstack
20:00 xoritor i can do it now, but it would be easier to do it later
20:00 kampnerj dgbaley:   what version of gluster are you using?
20:00 dgbaley kampnerj: 3.7.x
20:01 * Romeor is just wondering if there is some special configuration needed to get libgfapi working with virt-manager
20:01 Romeor and no, i won't go centos way :D
20:01 dgbaley Romeor: is your transport tcp or rdma?
20:01 Romeor tcp
20:01 JoeJulian I tried ovirt, but when the root proxy service (I forget what it's called, something that runs commands with root permissions) was leaking memory to the point where 32gig was used up in an hour, I had to give up on it.
20:01 kampnerj any reason to think it wouldn't work with 3.5?
20:01 Jampy joined #gluster
20:02 dgbaley JoeJulian: that's java for you.
20:02 kampnerj I can't get libvirt-kvm-qfapi to work on Centos 7
20:02 xoritor JoeJulian, i have had issues with it in the past
20:02 xoritor lol... hoping they fixed them by now
20:02 Jampy Hello. I get a number of "remote operation failed: No such file or directory" messages in my Gluster logs, but can't figure out whats wrong. Maybe anyone here can help me?
20:02 JoeJulian Yeah, java's the other reason. If something's wrong with openstack - it's python. I'll just fix it.
20:02 dgbaley kampnerj: I don't know you have to find the error logs, you've got glusterd, brick logs, client logs, qemu logs, libvirt logs
20:02 Jampy I collected some info at http://pastebin.com/ZzRuLucf
20:02 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
20:02 xoritor JoeJulian, have you ever successfully upgraded openstack
20:03 JoeJulian xoritor: many times.
20:03 Romeor Jampy, ur back!!!!
20:03 Jampy Romeor: hi :)
20:03 xoritor and it didnt break
20:03 xoritor is it ubuntu or red hat rdo?
20:03 dgbaley kampnerj: also I am not sure if the gfapi lib that qemu is linked against is binary compatible with gfapi 3.5, 3.6, 3.7
20:03 JoeJulian xoritor: no
20:03 Romeor Jampy, u've mentioned that u never had problems with d8 and gluster. do u run proxmox with glusterfs backend also or we are speaking about other VE?
20:04 JoeJulian xoritor: to both... it didn't break and it's neither ubuntu nor rdo.
20:04 xoritor JoeJulian, my hats off to you
20:04 xoritor JoeJulian, it is notorious for breaking on upgrades
20:04 JoeJulian We build our own arch packages in virtualenv.
20:04 shaunm_ joined #gluster
20:04 Romeor notorious was a rapper
20:04 xoritor archlinux?
20:04 JoeJulian yep
20:05 xoritor aaah
20:05 xoritor nice
20:05 dgbaley xoritor: i upgraded g* -> icehouse -> juno -> kilo no problems. Although now I'm switching from openvswtich/gre to linuxbridge/vlan and have opted to do a fresh install
20:06 JoeJulian dgbaley: why the switch away from openvswitch? (not that there's anything wrong with that, just curious)
20:06 Jampy Romeor: Using Proxmox 3.4 in HA mode (3 nodes) and Gluster is my main shared storage (3 bricks, replicated). A handful Debian 8 VMs and some more Debian 8 CTs run fine with it. I
20:06 Romeor give me a gun with a silver bullet, i'll shot myself
20:06 dgbaley JoeJulian: Because it's the component I've had the most problems with. I never liked a userspace daemon for networking, you still are using linuxbridges anyway, it's slower, debugging is more difficult, ...
20:06 Romeor Jampy, pls pm me with ur volume settings and poxmox versions...
20:06 * xoritor hands Romeor "the gun"
20:07 xoritor careful now... thats loaded
20:07 Romeor Jampy, and underlying FS for gluster and its settings
20:07 Jampy Romeor: http://pastebin.com/ZzRuLucf - you find it in my problem report, which is why I came here this time ;)
20:07 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
20:07 * JoeJulian dreams of a day when openvswitch is offloaded to the NIC.
20:08 Romeor it can't be this way.. just can't be! !!!!!!! i run 4 nodes and every one has this problem with d8
20:08 dgbaley JoeJulian: ah, is someone working on that?
20:08 haomai___ joined #gluster
20:08 JoeJulian I don't know. I can dream, though.
20:08 xoritor JoeJulian, you should start working on that tomorrow
20:09 saltsa_ ccccccdugcrfeulbvdckrjcvjvnjubgggvdcktnddnet
20:09 xoritor and that does not mean you have to write code
20:09 Romeor Jampy, wrong link?
20:09 xoritor just get enough people interested in it to do it for you ;-)
20:10 dgbaley JoeJulian: I also had to use ovs initially because our network was ipoib. I just got a 40G switch and 40G NICs so I'm redoing the networking anyway and it's a good opportunity to get rid of OVS which does nothing for me.
20:10 Jampy Romeor: did you get my PM?
20:10 xoritor ovs is PITA to debug
20:10 dgbaley As far as OpenStack goes, there are some features that only work with OVS (like DVR which I don't use), but that's just because no one implemented it for linuxbridge, there's no technical limitation
20:11 dgbaley It's a bad sign that neutron implements its firewall in iptables even when using ovs -- but I don't know the true reason this was done
20:12 JoeJulian The problem in getting something like hardware implementation for ovs switching is that none of the hardware manufacturers would be interested. They all have it in their thick skulls that firmware has value. Just make the best hardware, make an open hdk, and the world will beat a path to your door.
20:13 dgbaley JoeJulian: well, part of the thing about SDN is the S
20:13 kampnerj I get an error in the qemu log. Can I post it here?
20:13 xoritor dgbaley, you cant do a statefull firewall with flows
20:13 JoeJulian kampnerj: use fpaste.org
20:13 xoritor that is why they used iptables
20:13 dgbaley xoritor: meaning you can't with ovs?
20:13 xoritor right
20:14 dgbaley xoritor: ah okay, I know OpenFlow doesn't support that, (well in a way), but am not sure about ovs extensions
20:14 xoritor flows have no "state"
20:14 xoritor but they are awesome
20:15 JoeJulian Neutron uses a hybrid of iptables for stateful rules, and flow rules for traffic direction.
20:15 xoritor so state has to be done in iptables and really should be done in ipset
20:15 JoeJulian ipset's coming.
20:15 xoritor yea but neutron just sucks
20:16 JoeJulian I think it's mostly because people don't understand it and implement it poorly.
20:16 xoritor working with it is like stabbing yourself repeatedly in the junk
20:16 xoritor with a dull knife
20:17 PatNarciso xfs related: after removing an xfs partition, do ya think its necessary to dd zero-out an old xfs partition before expanding an xfs partition over it?
20:18 xoritor you shouldn't need to... mkfs.xfs -f over it and you should be good
20:18 dgbaley xoritor: all of openstack "just sucks", but its still the best option for a multi-tenant self-hosted iaas
20:19 xoritor dgbaley, maybe
20:19 dgbaley and like JoeJulian said, since its python I can easily manage a bunch of changes in puppet to smooth out some of the rough edges
20:19 xoritor dgbaley, lots of people are using cloudstack
20:19 xoritor yea
20:19 xoritor true
20:19 haomaiwa_ joined #gluster
20:20 dgbaley Isn't that Java as well?
20:20 xoritor there is that
20:20 xoritor i dont know
20:20 xoritor it was too dumbed down for me
20:20 kampnerj here is my error log: http://ur1.ca/n5din
20:20 xoritor i really dont care if things are java or python
20:21 xoritor elasticsearch is java
20:21 JoeJulian kampnerj: connection failed for server=butch.metarhythm.com port=24007
20:21 xoritor im going to have to run it somewhere
20:21 JoeJulian kampnerj: Also look in /var/log/glusterfs for a client log. That might give more details.
20:22 PatNarciso xoritor, its gonna be an xfs_grow.
20:22 JoeJulian The problem I have with java is that it's a black box. Debugging it is worse than trying to debug ruby.
20:22 PatNarciso xoritor, my only concern is if a recover needed to occur later... would it 'see' the old metadata.
20:22 xoritor JoeJulian, that my good sir... is a very valid point
20:23 xoritor PatNarciso, it may
20:23 xoritor PatNarciso, if you're just growing it nothing is wiped
20:24 xoritor PatNarciso, what are you trying to achieve?
20:24 xoritor PatNarciso, do you want it seen as a new partition and filesystem?
20:25 xoritor PatNarciso, or would you rather keep everything as it is and just extend the space?
20:26 Romeor JoeJulian, ndevos is everything ok with this message?
20:26 Romeor do you wait some time between creating the vm and firing it up?
20:26 PatNarciso attempting to achieve: I got 10 xfs partitions... that are 10 bricks in a single distributed volume... that I'm merging into a single partition.
20:26 Romeor not that message...
20:26 PatNarciso so, I copy/migrate data away from brick2... remove brick2... dd-zero-out parition 2... and then grow partition 1.
20:27 Romeor [2015-07-15 11:27:45.553952] I [dht-selfheal.c:1065:dht_selfheal_layout_new_directory] 0-HA-1TB-S14A4F-pve-dht: chunk size = 0xffffffff / 1032124 = 0x1041
20:27 Romeor [2015-07-15 11:27:45.554017] I [dht-selfheal.c:1103:dht_selfheal_layout_new_directory] 0-HA-1TB-S14A4F-pve-dht: assigning range size 0xfffb6ebc to HA-1TB-S14A4F-pve-replicate-0
20:27 Romeor [2015-07-15 11:27:45.555603] I [MSGID: 109036] [dht-common.c:6296:dht_log_new_layout_for_dir_selfheal] 0-HA-1TB-S14A4F-pve-dht: Setting layout of /images/232 with [Subvol_name: HA-1TB-S14A4F-pve-replicate-0, Err: -1 , Start: 0 , Stop: 4294967295 ],
20:27 cholcombe what does /tmp/quotad.socket do?  Can you issue commands to that?
20:27 calavera joined #gluster
20:28 DV joined #gluster
20:29 PatNarciso at least, that has been my process as I attempt to reduce IO of my volume.
20:30 xoritor i dont think you need to dd
20:30 xoritor you may
20:31 PatNarciso yah?  that would be great.
20:31 PatNarciso yah?  that would not be great.
20:31 jmarley joined #gluster
20:31 xoritor i doubt it
20:31 PatNarciso I also doubt it.
20:31 xoritor i would say try it
20:32 xoritor when you grow the fs it will essentially remap the part it needs so the metadata of the old stuff will not be there any more
20:32 xoritor so you should not need the dd
20:33 kampnerj well, port 24007 is open
20:33 xoritor you could try this
20:33 PatNarciso if I were to write an fs, that is what I would do.
20:33 xoritor dd before the fs?
20:34 kampnerj and I can fuse mount it.
20:34 PatNarciso xoritor, sorry, didn't follow you on that last comment.
20:35 xoritor you would dd before you mkfs?
20:35 haomaiwa_ joined #gluster
20:37 PatNarciso I'm currently dd'n partition2, then gdisk'n the mount, resizing partition1 to have new end sector (that was old partition2 end sector) and then xfs_grow'ing partition1.
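[editor's note] A hedged sketch of the partition-merge steps PatNarciso describes, assuming partition 1 of /dev/sdb is an XFS brick mounted at /srv/brick1; device names and mount point are illustrative. The grow tool is xfs_growfs and it works on a mounted filesystem.
    # after migrating data off brick2, removing it from the volume and deleting partition 2:
    gdisk /dev/sdb          # delete partition 1 and recreate it with the same start sector and the old partition 2's end sector
    partprobe /dev/sdb      # ask the kernel to re-read the partition table (a reboot may be needed if it refuses while partitions are mounted)
    xfs_growfs /srv/brick1  # grow the XFS filesystem online to fill the enlarged partition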
20:38 JoeJulian kampnerj: As a standard non-root user, can you telnet to 24007?
20:40 kampnerj yes
20:42 JoeJulian Then I'd check that client log next.
20:43 kampnerj on the client, i have /var/log/glusterfs/mnt-gluster.log,
20:43 kampnerj that's only for the fuse mount, right?
20:43 JoeJulian right
20:43 kampnerj there is no other client log in that directory.
20:44 kampnerj and I have the fuse mount unmounted right now
20:44 JoeJulian sure
20:45 kampnerj I can do this: qemu-img create gluster://server/gv0/images/test.img
20:46 kampnerj so the error is only when i do virsh start or start from virt-manager
20:47 kampnerj is it an selinux thing maybe?
20:47 JoeJulian could be, sure.
20:47 JoeJulian selinux does seem to have trouble keeping their ruleset up to the point where stuff works.
20:48 kampnerj alright, I'll try disabling.
20:49 JoeJulian btw... I don't know what you've got going on for this, but port 24007 is open to the world. I would firewall that appropriately.
20:49 haomaiwa_ joined #gluster
20:49 kampnerj ok, sure.
20:50 kampnerj just wanted to make sure it wasn't a firewall problem.
20:50 JoeJulian I understand. I just try to help keep an eye out for me peeps. ;)
20:50 kampnerj thanks.
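[editor's note] A hedged sketch of what "firewall that appropriately" could look like with plain iptables, assuming a trusted storage subnet of 10.0.0.0/24; the subnet is illustrative, and the brick port range (49152 and up on gluster 3.4+) may need widening to match the number of bricks.
    iptables -A INPUT -s 10.0.0.0/24 -p tcp --dport 24007:24008 -j ACCEPT   # glusterd management ports, trusted subnet only
    iptables -A INPUT -s 10.0.0.0/24 -p tcp --dport 49152:49251 -j ACCEPT   # brick ports, trusted subnet only
    iptables -A INPUT -p tcp --dport 24007:24008 -j DROP
    iptables -A INPUT -p tcp --dport 49152:49251 -j DROP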
21:12 DV joined #gluster
21:12 haomaiwa_ joined #gluster
21:16 Jampy joined #gluster
21:16 jmarley joined #gluster
21:16 kampnerj ok, no luck.   http://ur1.ca/n5e2c
21:19 JoeJulian Those lines look normal.
21:19 JoeJulian Even though they're error level, they're expected.
21:20 * JoeJulian grumbles about log levels being confusing.
21:21 kampnerj except it doesn't start.
21:21 kampnerj it's not the image cause I can start it if I fuse mount gluster
21:21 Romeor WEeeee<!!! I'm not alone! Jumpy has weird disk problems after clean install of d8 also
21:21 Romeor PARTY GUYS!
21:22 Romeor Jampy i mean
21:23 Romeor we had different setups with proxmox. he mounted Glusterfs via NFS, this way libgfapi is not working iiur ...
21:27 JoeJulian ~pasteinfo | kampnerj
21:27 glusterbot kampnerj: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
21:28 ndevos JoeJulian: ever tried http://termbin.com/ ?
21:28 Romeor oh hi ndevos
21:28 JoeJulian nice
21:28 * ndevos _o/ Romeor
21:29 * Romeor o_ ndevos
21:30 JoeJulian @forget paste
21:30 glusterbot JoeJulian: The operation succeeded.
21:30 kampnerj ok: http://ur1.ca/n5e6r
21:30 haomaiw__ joined #gluster
21:30 kampnerj tightened up the firewall by the way.  thanks.
21:31 JoeJulian @learn paste as For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
21:31 glusterbot JoeJulian: The operation succeeded.
21:31 ndevos Romeor: if you really have confirmed that virtio with proxmox/debian-8 does not work, and ide/sata is a workaround, could you send an update to your emails on the lists?
21:32 Romeor i will send, and i've closed the bug
21:32 Romeor and Jampy  just confirmed the same situation with proxmox and glusterfs :) using 3.5.2
21:33 JoeJulian kampnerj: According to the documentation, "IMPORTANT: As of now (april 2, 2014) there is a bug, as allow-insecure is not dynamically set on a volume. You need to restart the volume for the change to take effect."
21:33 JoeJulian kampnerj: have you tried that?
21:33 Romeor he just did the test install for me
21:33 ndevos Romeor: ah, good, so it really isnt a gluster issue in the end
21:33 kampnerj yes, I restarted both nodes.
21:33 Jampy why are you sure? everything works with Gluster/NFS
21:34 ndevos kampnerj: restarting both nodes is not the same as a "gluster volume stop $VOLNAME ; gluster volume start $VOLNAME"
21:34 Romeor ndevos, you don't even imagine how i feel after all of that. so much bothering wrong people
21:34 ndevos kampnerj: the configuration gets re-generated with a volume stop/start, not with a reboot
21:35 kampnerj ok, I did service glusterd restart
21:35 JoeJulian still not what was instructed.
21:35 ndevos kampnerj: that too, its different :)
21:35 Romeor ndevos, actually yes. if Jampy mounts glusterfs via NFS its working.. but this way there is no libgfapi between?
21:35 kampnerj OK.
21:35 haomaiwang joined #gluster
21:35 kampnerj on both nodes, or just on one?
21:36 ndevos Romeor: its not bothering if we find the problem and a solution, we just need to have it in the archives somewhere so that others can find it too
21:36 ndevos kampnerj: one is sufficient
21:37 ndevos Romeor: if gluster/nfs is used, gfapi is not involved
21:37 Romeor ok. so we've just got the root of the problem
21:37 Romeor so its gluster! :D
21:37 ndevos if nfs-ganesha is used, the communication is like: nfs-client (linux kernel) <-> nfs-ganesha + libgfapi <-> brick
21:38 cleong joined #gluster
21:38 Romeor Jampy has 2 setups atm (one is identical to mine) another when he mounts glusterfs using NFS. with first - result is mine. with his - everything works
21:38 kampnerj hurray! that fixed it.
21:39 Romeor sooooo.. /me starts thinking /me is in the right place with this problem actually
21:39 kampnerj thanks everyone!
21:39 ndevos kampnerj: good to hear :)
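[editor's note] A short recap of the settings that resolved kampnerj's libgfapi issue, assuming the volume is called gv0 (the name is illustrative); as noted above, the volume-level option only takes effect after a volume stop/start, and the glusterd.vol change needs a glusterd restart.
    gluster volume set gv0 server.allow-insecure on
    # in /etc/glusterfs/glusterd.vol on each server, inside the management volume block:
    #     option rpc-auth-allow-insecure on
    service glusterd restart
    gluster volume stop gv0
    gluster volume start gv0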
21:39 Romeor libgfapi is glusterfs's thing.. ain't it?
21:40 ndevos yes, libgfapi is the gluster client library that applications can use to "speak" the glusterfs protocol
21:40 Romeor so if one setup with it won't work and another without it works...
21:40 Romeor how can i help you to debug it further
21:40 Romeor :)
21:41 ndevos uhh, still, it only happens with Debian 8?
21:41 Romeor there is a problem between d8 virtio drivers and glusterfs libgfapi.
21:41 Romeor yes, only d8
21:42 ndevos thats rather awkward... I do not think qemu uses libgfapi differently for different OS's
21:42 Jampy is qemu aware of gluster at all !?
21:44 ndevos if qemu is started with a "-disk gluster://server/volume/dir/vm.img" kind of url for the image, it will use libgfapi.so to talk to the bricks directly, no filesystem overhead on the hypervisor
21:44 ndevos see http://raobharata.wordpress.com/2012/10/29/qemu-glusterfs-native-integration/ for a very nice description
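[editor's note] A minimal sketch of the libgfapi access path ndevos describes, assuming a volume named gv0 and a qemu build linked against libgfapi (the ldd check shown earlier in this log); image path, size and format are illustrative.
    qemu-img create -f qcow2 gluster://server/gv0/images/test.qcow2 20G
    qemu-system-x86_64 -drive file=gluster://server/gv0/images/test.qcow2,if=virtio ...
    # libvirt/virt-manager generate an equivalent network disk definition (protocol 'gluster') when a gluster storage pool or gluster:// path is used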
21:44 Jampy FYI, network doesn't matter either. using 127.0.0.1 as server does still reproduce the problem (this time D8 isn't even able to boot at all - all kind of weird errors)
21:45 Romeor so how do we sum everything up?
21:45 Romeor is it d8 virtio drivers problem, proxmox qemu problem, or with minimal probability glusterfs libgfapi problem?
21:45 haomaiwang joined #gluster
21:45 ndevos well, the linux nfs-client may do some different caching than libgfapi does, it can change the access pattern that the bricks see
21:46 Romeor Jampy had a very funny result btw. after reboot linux thought hes on fat32, right Jampy ?
21:46 nsoffer joined #gluster
21:46 Romeor http://i.imgur.com/c4XSzzV.png
21:47 ndevos Romeor, Jampy: have you disabled write-behind and read-ahead on the gluster volume? if not, you may want to try that
21:47 * Romeor not
21:48 Jampy ndevos, I kept the defaults - and already removed the test-VMs... :-/
21:48 Romeor is volume restart is needed?
21:48 Jampy worth doing another test for that?
21:48 ndevos no, restart is not needed, those options only affect the client side
21:48 Romeor JamesG, i will do
21:49 Romeor i've got testing volume with xfs
21:49 Romeor i wanted to test anyway
21:49 Romeor Jampy, *
21:49 Romeor I hate xchat autocomplition
21:49 ndevos "worth doing" depends on you guys :) we already know that ide/sata instead of virtio works fine, right?
21:49 Romeor while using it for few days only
21:50 Romeor ndevos, i'll do :) it won't take  much time
21:50 Jampy ndevos, default is "no cache"
21:50 Romeor ndevos, give me please the full option name
21:50 ndevos Jampy: "no cache" is the advise that is passed on the the filesystem I think, the filesystem "tries" to limit cache usage, not sure how much caching is still done
21:51 Romeor i'm on rdp+vnc solution, not very suitable to switch windows :(
21:51 Romeor if to be honest ssh + rdp + ssh + vnc
21:51 Romeor :D
21:51 ndevos Romeor: gluster volume set <VOLNAME> performance.write-behind off
21:52 ndevos Romeor: gluster volume set <VOLNAME> enable/disable off
21:52 Romeor 100x tnx
21:52 ndevos uh, not that
21:52 ndevos Romeor: gluster volume set <VOLNAME> performance.read-ahead off
21:53 Romeor i'll do two tests (as i never used fine tuned xfs before as gfs underlying fs): 1st. usual install without option, 2nd: add these
21:53 Romeor ndevos, how long you'll be online?
21:54 ndevos Romeor: maybe 1 or 2 minutes :)
21:54 ndevos Romeor: you can email the results as a reply to your email on gluster-devel, I'll probably read mails during the weekend anyway
21:55 Jampy ndevos, if I'm not asking too much, could you have a quick look at http://pastebin.com/ZzRuLucf please? I can't find any 7eaaf063-5357-4c57-b560-bb2ae815b9b3 file on any of the bricks..
21:55 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
21:56 Romeor ndevos, ok :)
21:58 ndevos Jampy: okay, so 7eaaf063-5357-4c57-b560-bb2ae815b9b3 would be a directory, and an nfs-client tries to use the file-handle that points to that directory
21:59 ndevos Jampy: a filehandle from gluster/nfs contains the volume-id + gfid, a lookup for a file called "966" in the directory pointed to by the file-handle can not be resolved
21:59 JoeJulian No, 7eaaf063-5357-4c57-b560-bb2ae815b9b3 is a gfid.
22:00 JoeJulian That's the effective inode within a gluster volume.
22:00 ndevos Jampy: yes, a gfid of a directory, the action is done on <gfid:7eaaf063-5357-4c57-b560-bb2ae815b9b3>/966
22:01 JoeJulian 7eaaf063-5357-4c57-b560-bb2ae815b9b3 represents a directory under which the file 966 should exist, but either the directory with that gfid doesn't exist, or that file doesn't exist.
22:01 edong23 joined #gluster
22:01 Jampy what should I do?
22:01 JoeJulian Stop trying to lock 966?
22:02 Jampy No clue what is trying to lock...
22:02 Jampy probably one of the Proxmox CTs that stopped working..
22:02 ndevos Jampy: the client that tries to access that dir/file should get a -ESTALE error, and it should not try to use that dir/file, an unmount/mount should fix it
22:02 JoeJulian basically, nfs is trying to do what your application is telling it to do. Since it can't, it's logging a warning.
22:03 * ndevos leaves for the day, have a good weekend all!
22:03 JoeJulian You too!
22:03 Jampy ndevos: thanks and have a good time =)
22:04 Jampy JoeJulian: how can I find out what file/directory is that exactly?
22:05 JoeJulian The file has been mentioned several times so far. As for the directory... ,,(resolver)
22:05 glusterbot I do not know about 'resolver', but I do know about these similar topics: 'gfid resolver'
22:05 JoeJulian @gfid resolver
22:05 glusterbot JoeJulian: https://gist.github.com/4392640
22:06 JoeJulian My suspicion is that the directory was deleted by something, but there's an application with that directory as its pwd that's still trying to use it to write a file (966).
22:06 JoeJulian An lsof on your clients might find it.
22:08 Jampy 7eaaf063-5357-4c57-b560-bb2ae815b9b3    ==      File:   ls: cannot access /data/gluster/systems//.glusterfs/7e/aa/7eaaf063-5357-4c57-b560-bb2ae815b9b3: No such file or directory
22:08 Jampy find: invalid argument `!' to `-inum'
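[editor's note] A hedged sketch of checking a gfid by hand on one brick, assuming the brick root is /data/gluster/systems as in the output above; in the .glusterfs tree a directory's gfid entry is a symlink to its parent path while a regular file's is a hard link, so readlink or find -samefile applies. An empty result here is consistent with JoeJulian's suspicion above that the directory was already deleted. The gist linked earlier automates the same lookup.
    GFID=7eaaf063-5357-4c57-b560-bb2ae815b9b3
    BRICK=/data/gluster/systems
    ENTRY=$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID
    readlink "$ENTRY"                              # prints ../../<parent-gfid path>/<dirname> if the gfid is a directory
    find "$BRICK" -samefile "$ENTRY" 2>/dev/null   # prints the real path(s) if the gfid is a regular file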
22:08 Romeor Jampy, if you would ask me, i would not recommend using gfs for VZ containers (and containers also, btw)
22:09 Romeor VZ containers seem to be running fine only on local storage which is mounted to /var/lib/vz only.. i had so many problems when i tried to move them to another place...
22:09 Jampy Romeor: as said, NFS is faster for small files...
22:11 * Romeor now sticks only to qemu
22:11 Jampy Romeor: yeah, like unix sockets that don't like migration.... I'm actually thinking to avoid CTs at all (but still use them for non-HA CTs using local storage)
22:11 Romeor and in proxmox 4 there is no more openvz
22:11 Jampy JoeJulian: the resolver script fails for me (see above)
22:12 Jampy Romeor: but LXC, right?
22:12 haomaiw__ joined #gluster
22:12 Romeor so i'll definitely move all remained VZs to qemu.. thanx God it is as simple as 1-2-3
22:12 Romeor yep lxc. which transforms vz to lxc via dump-restore process..
22:13 Romeor but still.. not a problem to transform vz to qemu and live without nightmares :)
22:14 Jampy Romeor: I still want to use OpenVZ for MySQL/XtraDB containers that will remain on one phys. machine forever (1 CT per node) and use local storage. Using a MySQL multi-master setup the HA is implemented by the database itself and so I have max performance...... in theory at least
22:14 haomaiwang joined #gluster
22:15 Romeor there is almost no performance drop if we compare vz and qemu... afaik
22:15 Romeor so you could have your qemu vm instead of vz on local storage :)
22:16 Romeor some 3-5 years ago there really was a drop. not now
22:16 Romeor 1-2% difference compared to host system
22:16 Jampy that's contrary to everything I heard so far =) for any VM there is added I/O complexity whereas CTs offer near-native performance
22:18 Jampy Romeor: Debian8 aside, are you comfortable using Gluster as shared storage for qemu? my little problems like the one I'm currently trying to solve make me worry that one day I'll lose a VM disk image because gluster has some... problems
22:18 plarsen joined #gluster
22:20 nage joined #gluster
22:23 Romeor i had no problems until this d8 now with qemu. and i tried drbd too. also some other admins and tech guys in our company wanted ceph.. i was alone against two and won. so i can say i am very comfortable with gfs.
22:23 Jampy JoeJulian: `find /data -name "*f063-5357-4c57-b560-bb2ae815b9b3"` shows no result on any brick ("7eaaf063-5357-4c57-b560-bb2ae815b9b3" would be the GFID). Isn't that strange?
22:23 * Jampy reboots all nodes, one after another
22:23 Romeor gfs is the last thing in my setup i'm worried about
22:24 Jampy Romeor: good to hear =)
22:24 Romeor Jampy, yes, the windows style!
22:24 Romeor i like to reset reset
22:24 Romeor :)) i do that pretty often to linux too
22:25 Romeor meanwhile i'm almost sure you won't have problems after reboot
22:25 Romeor its due to some vz BS
22:25 Romeor socket here socket there and you can't find it anywhere
22:26 Jampy Romeor: you give your VMs just 512 MB RAM?
22:26 Romeor if i know i do not need more
22:26 Romeor i even give sometimes 256 and 384
22:27 Jampy hum, first node has problems shutting down because NFS server (gluster) is "not responding".... ough. sounds like that node was the culprit
22:27 Romeor one of mail servers runs on 386, while only 123 is used
22:27 Jampy Romeor: no downsides with that? what are your VMs doing?
22:28 Romeor 384*
22:28 Jampy Romeor: no disk cache?
22:28 Romeor nope
22:28 Romeor its like with real HW. if you know u don't need a lot ram for samba, why u install 16GB there
22:30 Jampy hmmmm. hypervisor kernel crashed due to NFS problems :/
22:31 Jampy Romeor: hmm, will give it a try and reduce ram
22:34 Romeor i've got mail servers, private httpd servers, public httpd server with not that much www domains and clicks, nagios (this one has 2GB), redmine, pppoe server, openvpn, ns, streaming servers (vlc),  owncloud (4GB here), zarafa (6GB here), low used mysql servers (1gb each), few simple desktops with mate and vnc including this one (4gb each), munin, cacti ( 1 gb each), 1 win server with 4 gb
22:35 calavera joined #gluster
22:35 Romeor ftp servers
22:35 JoeJulian Jampy: I was correct, then. The directory was, indeed, deleted. The problem is with an application specific to your use case. You'll have to figure out which application. If you need to figure out which client, you'll have to look at nfs traffic with wireshark.
22:36 Romeor and as i started to implement gfs not that far ago, only 50% of vms is on gfs
22:36 Romeor and only qemu
22:38 Romeor if we talk about vz, i've got some with 128 ram runing recent distros :D
22:39 Romeor Jampy, https://en.wikipedia.org/wiki/Comparison_of_platform_virtualization_software look at the second table and search for kvm
22:39 Romeor its up to near native compared to host
22:40 Romeor WHAAA!!
22:40 Romeor tnx
22:41 Romeor so "the vz, the vz, the vz is on fire! we don't need no water let the MF burn... burn MF, burn..
22:42 Jampy Romeor: will have a look. thanks
22:42 * Jampy goes to sleep
22:42 Jampy bye all!
22:44 * Romeor cherishes the +v (my precious)
22:48 Romeor hah. just came in mind.. i'm officially on vacation now... rofl.
22:50 Romeor last time i used irc was... CS1.6 time, when i was 14-18 ... quakenet.
23:00 doekia joined #gluster
23:02 adzmely joined #gluster
23:07 glusterbot News from newglusterbugs: [Bug 1244361] Command "cp *" to directory crosses hard-limit quota and copies all requested files <https://bugzilla.redhat.com/show_bug.cgi?id=1244361>
23:11 haomaiwang joined #gluster
23:16 haomaiw__ joined #gluster
23:18 haomaiwang joined #gluster
23:38 haomaiwa_ joined #gluster
23:41 haomaiwang joined #gluster
23:47 * PatNarciso has the quad.
23:50 haomaiw__ joined #gluster
23:54 ank joined #gluster
23:54 haomaiwang joined #gluster
23:54 Telsin joined #gluster
