IRC log for #gluster, 2014-08-18

All times shown according to UTC.

Time Nick Message
00:06 gildub joined #gluster
00:53 recidive joined #gluster
00:58 vimal joined #gluster
01:03 sputnik13 joined #gluster
01:13 bala joined #gluster
01:19 lyang0 joined #gluster
01:27 recidive joined #gluster
01:51 haomaiwa_ joined #gluster
01:54 suliba joined #gluster
01:54 haomaiw__ joined #gluster
02:03 gildub joined #gluster
02:15 nullck joined #gluster
02:41 nullck joined #gluster
02:46 dusmant joined #gluster
02:53 bharata-rao joined #gluster
02:56 sputnik13 joined #gluster
03:00 haomaiwa_ joined #gluster
03:01 haomaiw__ joined #gluster
03:01 nullck joined #gluster
03:03 kanagaraj joined #gluster
03:42 spandit joined #gluster
03:48 bala joined #gluster
03:50 kshlm joined #gluster
03:53 RameshN joined #gluster
03:54 itisravi joined #gluster
04:00 recidive joined #gluster
04:14 ppai joined #gluster
04:14 sahina joined #gluster
04:19 RameshN joined #gluster
04:21 nshaikh joined #gluster
04:24 nishanth joined #gluster
04:25 itisravi joined #gluster
04:27 karnan joined #gluster
04:27 ndarshan joined #gluster
04:35 kdhananjay joined #gluster
04:38 Rafi_kc joined #gluster
04:41 sahina joined #gluster
04:41 lalatenduM joined #gluster
04:42 nbalachandran joined #gluster
04:43 anoopcs joined #gluster
04:47 hchiramm joined #gluster
04:50 ramteid joined #gluster
04:57 anoopcs joined #gluster
04:58 jiffin joined #gluster
04:58 sputnik13 joined #gluster
05:03 hagarth joined #gluster
05:13 RameshN joined #gluster
05:15 bala joined #gluster
05:30 atalur joined #gluster
05:38 spandit joined #gluster
05:40 dusmant joined #gluster
06:00 rgustafs joined #gluster
06:00 mariusp joined #gluster
06:01 atinmu joined #gluster
06:04 dusmant joined #gluster
06:04 RameshN joined #gluster
06:04 meghanam joined #gluster
06:04 meghanam_ joined #gluster
06:17 karnan joined #gluster
06:29 bala joined #gluster
06:31 ricky-ti1 joined #gluster
06:33 LebedevRI joined #gluster
06:34 kanagaraj joined #gluster
06:45 atalur joined #gluster
06:50 kanagaraj joined #gluster
06:52 recidive joined #gluster
07:03 shylesh__ joined #gluster
07:04 ekuric joined #gluster
07:05 ekuric joined #gluster
07:08 ctria joined #gluster
07:08 raghu` joined #gluster
07:08 ppai joined #gluster
07:10 rastar joined #gluster
07:14 meghanam_ joined #gluster
07:14 meghanam joined #gluster
07:21 dusmant joined #gluster
07:23 sputnik13 joined #gluster
07:23 nbalachandran joined #gluster
07:26 hagarth joined #gluster
07:30 glusterbot New news from newglusterbugs: [Bug 1130888] Renaming file while rebalance is in progress causes data loss <https://bugzilla.redhat.com/show_bug.cgi?id=1130888>
07:35 sputnik13 joined #gluster
07:35 rastar joined #gluster
07:45 DV_ joined #gluster
07:48 harish_ joined #gluster
07:49 hybrid512 joined #gluster
07:50 andreask joined #gluster
07:51 fsimonce joined #gluster
07:52 nbalachandran joined #gluster
07:52 sputnik13 joined #gluster
07:54 ndarshan joined #gluster
07:55 ppai joined #gluster
07:59 liquidat joined #gluster
08:00 rolfb joined #gluster
08:01 aravindavk joined #gluster
08:04 dusmant joined #gluster
08:05 hagarth joined #gluster
08:15 karnan joined #gluster
08:21 ndarshan joined #gluster
08:29 Norky joined #gluster
08:32 meghanam_ joined #gluster
08:35 meghanam joined #gluster
08:42 saurabh joined #gluster
08:47 nishanth joined #gluster
08:48 bala joined #gluster
08:49 dusmantkp_ joined #gluster
08:50 coreping left #gluster
08:51 ppai joined #gluster
08:51 itisravi_ joined #gluster
08:54 imad_VI joined #gluster
08:57 imad_VI Hi everyone, I am setting up a shared volume with replica between two nodes. On the first I want to add 2 bricks: the first node has /mnt/share1 and /mnt/share2, the second has /mnt/share. Will the second node respect the file system tree from the first node?
09:00 imad_VI I'm not sure if it's clear enough :)
09:08 atinmu joined #gluster
09:18 edward1 joined #gluster
09:20 ctria joined #gluster
09:21 rastar joined #gluster
09:21 RameshN joined #gluster
09:22 atalur joined #gluster
09:23 sahina joined #gluster
09:28 kanagaraj_ joined #gluster
09:29 meghanam joined #gluster
09:29 tryggvil joined #gluster
09:29 meghanam_ joined #gluster
09:32 hchiramm joined #gluster
09:32 kanagaraj joined #gluster
09:36 Norky so two bricks on one node and one brick on another?
09:43 RameshN joined #gluster
09:47 imad_VI Norky: Yes that's it
09:57 Norky well it'd have to be a three-way replica then
09:57 Norky with bricks A and B on node 1 replicating to each other - there's not much point to doing that
09:58 Slashman joined #gluster
10:00 kanagaraj joined #gluster
10:08 Pupeno joined #gluster
10:11 suliba joined #gluster
10:21 ndevos imad_VI, Norky: well, a plain dht (non-replicate) would do it too, the directory tree is replicated to all distributed bricks, just the files/contents is not
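(As an aside, a rough sketch of the two layouts being contrasted here, using hypothetical host and brick names rather than imad_VI's actual setup:)
    # plain distribute (DHT): the directory tree appears on every brick, each file lands on one brick
    gluster volume create sharevol node1:/mnt/share1 node1:/mnt/share2 node2:/mnt/share
    # three-way replica: every file copied to all three bricks; gluster asks for "force" because two replicas share node1
    gluster volume create sharevol replica 3 node1:/mnt/share1 node1:/mnt/share2 node2:/mnt/share force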
10:24 kanagaraj joined #gluster
10:24 Norky yeah, true
10:25 Norky are the bricks the same size, imad_VI ?
10:26 ppai joined #gluster
10:26 imad_VI the bricks on node1 are the same size, and the sum of them is equal to the brick on the second node
10:27 dusmantkp_ joined #gluster
10:32 fengkun02 joined #gluster
10:32 qdk joined #gluster
10:33 fengkun02 [2014-08-18 10:33:05.591389] E [name.c:147:client_fill_address_family] 0-glusterfs: transport.address-family not specified. Could not guess default value from (remote-host:(null) or transport.unix.connect-path:(null)) options
10:33 fengkun02 who can help me?
10:34 fengkun02 [2014-08-18 10:33:26.592802] W [dict.c:1055:data_to_str] (-->/usr/lib64/glusterfs/3.5.1/rpc-transport/socket.so(+0x68ec) [0x7f892b5108ec] (-->/usr/lib64/glusterfs/3.5.1/rpc-transport/socket.so(socket_client_get_remote_sockaddr+0xad) [0x7f892b514fcd] (-->/usr/lib64/glusterfs/3.5.1/rpc-transport/socket.so(client_fill_address_family+0x20b) [0x7f892b514e8b]))) 0-dict: data is NULL
10:34 glusterbot fengkun02: ('s karma is now -20
10:34 glusterbot fengkun02: ('s karma is now -21
10:34 glusterbot fengkun02: ('s karma is now -22
10:35 fengkun02 what's wrong?
10:35 ctria joined #gluster
10:55 atinmu joined #gluster
10:56 kanagaraj_ joined #gluster
11:01 gildub joined #gluster
11:06 d-fence joined #gluster
11:18 ppai joined #gluster
11:21 bala joined #gluster
11:29 Slashman joined #gluster
11:32 ws2k3 Hello, does glusterfs still have a web interface? i just noticed that an older version of glusterfs had a web interface, is this still usable?
11:33 siXy do you mean ovirt?
11:34 fengkun02 [2014-08-18 10:33:05.591389] E [name.c:147:client_fill_address_family] 0-glusterfs: transport.address-family not specified. Could not guess default value from (remote-host:(null) or transport.unix.connect-path:(null)) options
11:35 fengkun02 who can help me?
11:35 fengkun02 when i execute replcae-brick
11:35 dusmantkp_ joined #gluster
11:38 sahina joined #gluster
11:41 calum_ joined #gluster
11:41 frag_work joined #gluster
11:42 giannello joined #gluster
11:42 ndarshan joined #gluster
11:42 abyss__ fengkun02: which version of gluster?
11:42 fengkun02 3.5.1
11:43 fengkun02 [2014-08-18 11:27:46.815923] W [dict.c:1055:data_to_str] (-->/usr/lib64/glusterfs/3.5.1/rpc-transport/socket.so(+0x68ec) [0x7ffafda8d8ec] (-->/usr/lib64/glusterfs/3.5.1/rpc-transport/socket.so(socket_client_get_remote_sockaddr+0xad) [0x7ffafda91fcd] (-->/usr/lib64/glusterfs/3.5.1/rpc-transport/socket.so(client_fill_address_family+0x20b) [0x7ffafda91e8b]))) 0-dict: data is NULL
11:43 glusterbot fengkun02: ('s karma is now -23
11:43 glusterbot fengkun02: ('s karma is now -24
11:43 glusterbot fengkun02: ('s karma is now -25
11:48 karnan joined #gluster
11:51 ws2k3 can anyone explain me the difference between stripe and distributed?
11:52 abyss__ fengkun02: hmmm, I think it's a bug. I had something similar but in gluster 3.3 and there was no solution to repair the issue, so I had to set up the cluster again... If you check bugzilla for glusterfs you will find this issue even for glusterfs 3.4... So it's possible that the bug is still not resolved :/
11:53 abyss__ one way or another I don't use replace-brick during migration anymore;)
11:53 hchiramm fengkun02, not sure whats happening , while we dig more , can u please create a Bugzilla for the same
11:53 frag_work left #gluster
11:55 abyss__ ws2k3: please refer to documentation on gluster.org. There is excellent explanation.
11:55 ws2k3 abyss__ i already read the website a couple of times but i have a few questions that i was unable to find an answer for
11:56 fengkun02 Bug 1131001   , I have just created
11:56 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1131001 unspecified, unspecified, ---, vbellur, NEW , 0-glusterfs: transport.address-family not specified. Could not guess default value from (remote-host:(null) or transport.unix.connect-path:(null)) options
11:56 hchiramm fengkun02++ thanks
11:56 glusterbot hchiramm: fengkun02's karma is now 1
11:56 abyss__ ws2k3: aks then
11:56 abyss__ *ask
11:57 ws2k3 i am unable to find out what the difference is between stripe and distribute
11:58 hchiramm when creating bz for GlusterFS please select the product as "GlusterFs"
11:58 hchiramm any way I am changing it for onw
11:58 hchiramm onw/now
11:59 fengkun02 when we need replace-brick, there is no way...
12:00 fengkun02 first remove-brick, then add-brick, like this
12:01 kanagaraj__ joined #gluster
12:01 glusterbot New news from newglusterbugs: [Bug 1131010] Invalid SSL connection on brick port make the brick disconnect <https://bugzilla.redhat.com/show_bug.cgi?id=1131010> || [Bug 1131001] 0-glusterfs: transport.address-family not specified. Could not guess default value from (remote-host:(null) or transport.unix.connect-path:(null)) options <https://bugzilla.redhat.com/show_bug.cgi?id=1131001>
12:02 Slashman_ joined #gluster
12:02 lalatenduM joined #gluster
12:06 abyss__ ws2k3: if you read the documentation and you still don't understand the difference then don't use stripe:) Stripe, in short, puts pieces of one file on each brick (so many parts of the same file end up on each of the bricks). Distributed: whole files are spread randomly across the bricks, something like raid0. None of thi
12:07 hchiramm fengkun02, it looks like replace-brick start is not supported
12:07 hchiramm in replace-brick, commit force is the only operation that is supported
12:08 ws2k3 abyss__ if i need performance and reliability should i use distributed replicated or striped replicated? i don't know how safe striped replicated is when one server goes down
12:08 hchiramm fengkun02, I am confirming further
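(For reference, the operations being discussed would look roughly like this, with hypothetical volume and brick names; only the immediate "commit force" form of replace-brick is supported, with self-heal or rebalance moving the data afterwards:)
    gluster volume replace-brick myvol oldserver:/bricks/b1 newserver:/bricks/b1 commit force
    # the remove/add route fengkun02 mentions, for distributed volumes:
    gluster volume remove-brick myvol oldserver:/bricks/b1 start     # wait for "completed" in status
    gluster volume remove-brick myvol oldserver:/bricks/b1 commit
    gluster volume add-brick myvol newserver:/bricks/b1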
12:09 ws2k3 i think if i would make a striped replicated with a copy of server1 disk1 onto server2 disk2 and a copy of server2 disk1 onto server1 disk2, then one server can go down and it should still be working, right?
12:15 abyss__ ws2k3: sorry, I can't speak now (working). But please refer to the documentation, they even indicate which is better for perf etc.
12:18 ctria joined #gluster
12:21 tyrok_laptop left #gluster
12:34 hchiramm joined #gluster
12:35 firemanxbr joined #gluster
12:37 kanagaraj joined #gluster
12:41 julim joined #gluster
12:47 chirino joined #gluster
12:47 bashtoni joined #gluster
12:50 bashtoni I've resized the bricks backing a gluster volume; how can I resize the volume itself?
12:51 ws2k3 bashtoni i'm really new to gluster so don't know much, but did you already do a rebalance?
12:52 bashtoni Nope
12:52 ws2k3 i guess that that can help
12:53 B21956 joined #gluster
12:53 bashtoni Nope, 'is not a distribute volume'
12:53 bashtoni It's just a replica set
12:54 chirino_m joined #gluster
12:54 ws2k3 hmm did you already try to write a file to it? cause that can trigger some glusterfs actions/checks. as far as i know the gluster volume is as big as the smallest brick
13:02 bennyturns joined #gluster
13:02 kdhananjay left #gluster
13:03 hagarth joined #gluster
13:04 altmariusp joined #gluster
13:13 nbalachandran joined #gluster
13:15 bashtoni OK, so the issue was I forgot to run xfs_growfs, glusterfs did everything automatically once I actually resized the volume.. :)
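(bashtoni's fix, sketched with a hypothetical brick mount point: gluster simply reports whatever space the brick filesystems have, so growing them online is all that is needed:)
    # on each server, after enlarging the device/LV backing the brick
    xfs_growfs /bricks/brick1
    # the volume should now show the new capacity per brick
    gluster volume status myvol detail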
13:18 tdasilva joined #gluster
13:21 recidive joined #gluster
13:21 ctria joined #gluster
13:22 sauce joined #gluster
13:24 dusmantkp_ joined #gluster
13:28 glusterbot New news from resolvedglusterbugs: [Bug 1031166] Can't mount a GlusterFS NFS export if you include a directory in the source field on Tru64 (V4.0 1229 alpha) mount <https://bugzilla.redhat.com/show_bug.cgi?id=1031166>
13:31 partner uuh, broken logrotate still in debian for 3.4.5, for example client (+common) are unable to rotate. there is a working version in the extras directory, not sure why it's not used, maybe just forgotten to fix, trying to search for a possible existing bug report
13:31 theron joined #gluster
13:33 theron_ joined #gluster
13:43 kanagaraj joined #gluster
13:48 partner commented on the existing ticket which seems to originate from 3.3.1
13:51 aravindavk joined #gluster
13:52 bala joined #gluster
13:55 tru_tru joined #gluster
13:57 dusmantkp_ joined #gluster
13:58 glusterbot New news from resolvedglusterbugs: [Bug 1031164] After mounting a GlusterFS NFS export intial cd'ing into directories on a Tru64 resaults in a Permission Denied error <https://bugzilla.redhat.com/show_bug.cgi?id=1031164>
14:02 glusterbot New news from newglusterbugs: [Bug 949706] Log-rotate needs to hup <https://bugzilla.redhat.com/show_bug.cgi?id=949706>
14:06 suliba_ joined #gluster
14:06 partner_ joined #gluster
14:08 jiffin1 joined #gluster
14:13 gts joined #gluster
14:14 coredumb joined #gluster
14:17 ctria joined #gluster
14:17 ctria joined #gluster
14:18 gehaxelt joined #gluster
14:19 imad_VI Is it possible to replicate 2 brick on one other ?
14:19 jobewan joined #gluster
14:20 wushudoin joined #gluster
14:21 imad_VI I have two disks that I need to keep separate on the first srv, but I also need to replicate them on one brick in another srv.
14:28 nbalachandran joined #gluster
14:28 ira joined #gluster
14:29 kkeithley imad_VI: you can have more than one brick per server.
14:30 imad_VI kkeithley:Ok is it it possible to replicate 2 bricks on 1 brick ?
14:30 kkeithley no
14:35 gehaxelt joined #gluster
14:35 ira joined #gluster
14:36 ctria joined #gluster
14:40 recidive joined #gluster
14:41 gehaxelt joined #gluster
14:42 coredump joined #gluster
14:44 vimal joined #gluster
14:49 nage joined #gluster
14:56 itisravi_ joined #gluster
14:56 kanagaraj joined #gluster
15:04 julim joined #gluster
15:09 marcoceppi joined #gluster
15:09 marcoceppi joined #gluster
15:12 daMaestro joined #gluster
15:15 _dist joined #gluster
15:17 _dist JoeJulian: since only 3.5.2 appears to fix the healing issue, was that 3.4.5 bug you linked me the fix for it? Do you have any specific concerns in running live on 3.5.2?
15:29 screamingbanshee joined #gluster
15:32 kanagaraj_ joined #gluster
15:36 _Bryan_ joined #gluster
15:38 sputnik13 joined #gluster
15:39 vimal joined #gluster
15:43 doo joined #gluster
15:45 bala joined #gluster
15:51 Pupeno_ joined #gluster
16:02 hagarth joined #gluster
16:11 dtrainor joined #gluster
16:13 PeterA joined #gluster
16:21 gmcwhistler joined #gluster
16:21 bala joined #gluster
16:22 zerick joined #gluster
16:28 lmickh joined #gluster
16:51 bala joined #gluster
16:52 recidive joined #gluster
16:52 mariusp joined #gluster
16:53 Humble joined #gluster
17:01 theron joined #gluster
17:03 theron_ joined #gluster
17:14 plarsen joined #gluster
17:17 recidive joined #gluster
17:28 daMaestro joined #gluster
17:29 rotbeard joined #gluster
17:31 semiosis _dist: new packages are in the gluster/qemu-glusterfs PPAs for qemu & libvirt.  /cc JoeJulian
17:31 semiosis in other news, lightning fried my cable modem and firewall friday!  just got things back online now :)
17:39 _dist semiosis: awesome, I'm working on debs right now for debian. 14.04 has an annoying grub bug with mdadm so for physical machines at least I'm sticking to wheezy
17:40 _dist but, I am curious if there are reasons I shouldn't use 3.5x? JoeJulian was saying he'd prefer 3.4.5 + a patch, but I'm testing 3.5.2 right now
17:40 * semiosis wouldn't know
17:41 _dist np, 3.5.2 fixes my constant healing issue (so I'm super happy about that)
17:42 semiosis thats great
17:46 _dist seems like most gluster packages are still missing the api src directory, so compiles of qemu/libvirt require source download, not a big deal maybe it's not an accident
17:46 _dist most = some (should have said)
17:49 semiosis _dist: i can't do anythign about "most packages" but if you could be more specific, maybe i can help with that
17:50 semiosis which packages, specifically?
17:50 mariusp joined #gluster
17:51 _dist off the gluster site itself, 3.5.2 wheezy deb, and the last time I used ubuntu ppa for 12.04lts (on gluster.org). It's not really a big deal though, few people compile qemu/libvirt I assume
17:54 recidive joined #gluster
17:54 semiosis _dist: i see the problem
17:54 semiosis i really thought i took care of that months ago!
17:55 semiosis i'll have a new package up there today
17:55 daMaestro joined #gluster
17:56 semiosis the 3.4.5 deb has it, but not the 3.5.2
17:56 semiosis woops
17:57 _dist cool, also about the place where it is in the tar (/api/src/*): qemu actually looks in /prefix/include/glusterfs/api/, not api/src
17:57 _dist that might confuse some people getting compile errors
17:59 semiosis i hope anyone compiling from source can work that out for themselves
17:59 semiosis if not, they should come here and ask for help
17:59 _dist yeah, honestly if they can't they probably shouldn't be compiling :)
18:00 ramteid joined #gluster
18:05 PeterA anyone use cachefilesd with native glusterfs mount?
18:06 PeterA ubuntu cachefilesd for glusterfs client
18:07 PeterA not nfs
18:13 recidive joined #gluster
18:16 ndevos PeterA: that functionality does not exist yet :-/
18:16 PeterA would that be on a feature req? :)
18:16 PeterA would love to have that
18:16 PeterA would be great to have that
18:17 ndevos there is a feature request for it already, and I think we have somone that will be working on it as a university project, see the latest gluster-devel mails
18:17 theron joined #gluster
18:18 ndevos PeterA: Vimal more or less mentioned to me that he is very interested in the fs-cache project, http://supercolony.gluster.org/pipermail/gluster-devel/2014-August/041957.html
18:18 glusterbot Title: [Gluster-devel] GlusterFS new-features/project ideas (at supercolony.gluster.org)
18:18 * ndevos goes afk again
18:22 getup- joined #gluster
18:25 recidive joined #gluster
18:35 _dist semiosis: oh you made libvirt part of your qemu, I'm ok with that I assume most people would want it that way
18:36 B21956 joined #gluster
18:39 _dist btw, thanks as well. Now I assume my next road-block will be finding a decent (lean) gui for libgfapi (which I honestly don't think exists?)
18:42 n7777 joined #gluster
18:48 cfeller joined #gluster
18:51 DV__ joined #gluster
18:55 cfeller is there a reason that http://download.gluster.org/pub/gluster/glusterfs/3.5/LATEST/ isn't pointing at 3.5.2 yet?
18:55 glusterbot Title: Index of /pub/gluster/glusterfs/3.5/LATEST (at download.gluster.org)
18:58 _dist cfeller: I'm not sure but I am testing 3.5.2 right now for libgfapi. I would suspect it's not latest stable yet
19:05 semiosis cfeller: fixed LATEST symlink
19:05 semiosis thx for pointing that out
19:08 cfeller sure thing
19:08 cfeller thanks for fixing it.
19:08 semiosis yw
19:10 recidive joined #gluster
19:15 PeterA when i restart glusterfs-server i got the gluster volume status showing the port N/A
19:15 PeterA http://pastie.org/9483895
19:15 glusterbot Title: #9483895 - Pastie (at pastie.org)
19:17 PeterA noticed these warnings on bricks
19:17 PeterA http://pastie.org/9483900
19:17 glusterbot Title: #9483900 - Pastie (at pastie.org)
19:19 PeterA i had to stop the glusterfs-server and pkill -f glusterfs
19:19 PeterA and restart the service
19:20 bene2 joined #gluster
19:25 Jamoflaw joined #gluster
19:25 andreask joined #gluster
19:28 sputnik13 joined #gluster
19:41 Pupeno joined #gluster
19:43 _dist semiosis: I assume that virt-manager is the only "decent" ui that is simple to setup/use with libvirt. But the trusty version is old and doesn't support libgfapi, I'm honestly not sure if the new version handles that storage domain well either
19:44 _dist I'll do some testing, perhaps if it turns out to work (the 1.0.1 version) it might be worth including
19:46 semiosis 30 seconds of googling & i found this: http://www.ubuntuupdates.org/package/getdeb_apps/trusty/apps/getdeb/virt-manager
19:46 glusterbot Title: UbuntuUpdates - Package "virt-manager" (trusty 14.04) (at www.ubuntuupdates.org)
19:46 semiosis obvs i cant endorse those packages, but might be worth looking into
19:48 Pupeno_ joined #gluster
19:48 semiosis there's probably others too.  i'm sure you're not the first person to need a newer virt-manager on trusty
19:50 _dist that's true, but almost no-one compiles with gluster support
19:51 semiosis does virt-manager need gluster?
19:52 _dist I do not believe so, but most people package it with libvirt (like you did with qemu) so it would clobber your libvirt stuff
19:55 coredump|br joined #gluster
19:56 xoritor joined #gluster
19:57 cfeller_ joined #gluster
19:57 edong23 joined #gluster
19:57 osiekhan4 joined #gluster
19:57 tru_tru_ joined #gluster
19:57 HuleB joined #gluster
19:57 doekia_ joined #gluster
19:57 SteveCoo1ing joined #gluster
19:59 muhh joined #gluster
20:00 _dist I suppose if I hunt down a version of virt-manager that was compiled over libvirt 1.2.2 (that you used) that should work
20:01 semiosis i see
20:01 theron joined #gluster
20:01 semiosis i simply used the latest version of libvirt available in the distro repo
20:03 _dist yeap and that's perfect, it's just that without a build meant to go with your libvirt it will be some amount of trouble for someone to get virt-manager (without compiling their own) to work with your gluster compiles of qemu & libvirt
20:03 _dist honestly though, I haven't confirmed that the newest version of virt-manager can even work with libgfapi
20:03 _dist it uses setup.py instead of configure, and I'm currently fighting with that
20:04 HoloIRCUser joined #gluster
20:06 Jamoflaw joined #gluster
20:18 xoritor does anyone know the xml to get bd-xlator to work with libvirt 1.2.7?
20:18 xoritor or rather 1.2.x ;-)
20:19 daMaestro joined #gluster
20:21 bennyturns joined #gluster
20:24 _dist semiosis: while performing my compile it looks like it might also be useful to have libvirt-dev in your ppa
20:24 recidive joined #gluster
20:25 semiosis _dist: it's there, i'm looking at it
20:26 _dist ok, sorry about that I didn't look where it installed it from (bad assumption)
20:26 semiosis np
20:32 clutchk1 joined #gluster
20:33 clutchk1 Hey all, rookie question. Can you do a rebalance while the volume is active? Is it safe?
20:37 semiosis clutchk1: the intention is for you to be able to do that, however there have been some bugs which got in the way over the years.  what version of glusterfs are you using?
20:39 clutchk1 glusterfs-3.4.0-1.el6.x86_64
20:39 semiosis probably affected by the recent bug
20:39 semiosis fixed in 3.4.5
20:40 clutchk1 ok good to know. Thanks!
20:40 semiosis however even without bugs, rebalance is expensive, and will probably add significant load to your cluster for a while, depending on how big the volume is
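(for reference, starting and watching a rebalance, volume name hypothetical:)
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status    # per-node progress; wait until every node reports completed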
20:43 plarsen joined #gluster
20:47 clutchk1 indeed, thanks very much semiosis.
20:48 semiosis yw
20:55 _dist semiosis: I've confirmed the new version of virt-manager can properly manage libgfapi
20:55 _dist so I'll try my first test deploy
21:01 _dist something is still off, I'll work with CLI first to see if it's libvirt or not
21:01 dare333 joined #gluster
21:02 dare333 hello
21:02 glusterbot dare333: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
21:03 dare333 I am having a problem with replace-brick. I have replaced one brick successfully of a pair. The second one says that it is "completed" as a status. But when I try to commit it, it then doesn't do anything but drop back to the cli after about 2-5 min. The same thing for trying to abort it. It seems to be stuck.
21:03 glusterbot New news from newglusterbugs: [Bug 1131271] Lock replies use wrong source IP if client access server via 2 different virtual IPs [patch attached] <https://bugzilla.redhat.com/show_bug.cgi?id=1131271> || [Bug 1131275] I currently have no idea what rfc.sh is doing during at any specific moment <https://bugzilla.redhat.com/show_bug.cgi?id=1131275>
21:04 semiosis dare333: glusterfs version?  distro versoin?
21:04 semiosis also might be helpful to put your etc-glusterfs-glusterd.log on pastie.org
21:04 dare333 glusterfs 3.4.1-3.el6_x86_64  on CentOS 6
21:05 ws2k3 Hello, does glusterfs still have a web interface? i just noticed that an older version of glusterfs had a web interface, is this still usable?
21:05 semiosis ws2k3: no, that's long gone.  you might find ,,(ovirt) helpful though
21:05 glusterbot ws2k3: http://wiki.ovirt.org/wiki/Features/Gluster_Support
21:06 ws2k3 ah so red hat removed the web interface from gluster and added it to ovirt?
21:06 dare333 semiosis: http://pastebin.com/Cx9S0mAD
21:06 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
21:07 semiosis ws2k3: i dont think there was a direct link like that....
21:07 semiosis afaik they are unrelated projects... the old gluster gui was discontinued, then ovirt added gluster support
21:07 xoritor anyone have any idea how to use a bd-xlator device with libvirt and not use fuse?  is it possible?  ie... is there libgfapi support for bd-xlator logical volumes yet?
21:07 dare333 http://pastie.org/9484120
21:07 glusterbot Title: #9484120 - Pastie (at pastie.org)
21:08 Pupeno joined #gluster
21:08 ghenry joined #gluster
21:08 ghenry joined #gluster
21:08 ws2k3 semiosis ah okay thanks for the info. one other question: if i would make a 2 node striped replicated volume, so 2 servers with 2 disks where disk2 is always a copy of the other server's disk1, will the storage continue to work if one of the machines goes down?
21:09 semiosis dare333: not too familiar troubleshooting these "unable to get lock" errors, but i would try restarting glusterd on all the servers. maybe that will free up the lock...
21:09 dare333 Hmm good idea. I'll give that a try
21:11 semiosis ws2k3: i would do distributed replicated (not striped replicated).  fuse clients will handle server failure automatically with a replicated volume.  nfs clients dont automatically failover (you need to do a VIP or somethign)
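(a sketch of the distribute-replicate layout semiosis is suggesting for two servers with two disks each, hostnames and paths hypothetical:)
    # each replica pair spans both servers, so either server can go down
    gluster volume create datavol replica 2 \
        server1:/bricks/disk1 server2:/bricks/disk1 \
        server1:/bricks/disk2 server2:/bricks/disk2
    # fuse client mount; the client connects to all bricks and fails over on its own
    mount -t glusterfs server1:/datavol /mnt/datavol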
21:11 semiosis xoritor: dont know, maybe someone else will have an answer
21:11 ws2k3 hmm so you cannot use fuse clients with striped ?
21:11 ws2k3 or why would u need to use distributed replicated instead of striped replicated
21:16 semiosis stripe is probably not what you want
21:16 semiosis ,,(stripe)
21:16 glusterbot Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
21:16 theron joined #gluster
21:17 daMaestro joined #gluster
21:17 semiosis usually when people say they want stripe they just dont understand what it is & think it's like RAID
21:17 theron joined #gluster
21:17 semiosis but it's not RAID
21:19 ThatGraemeGuy joined #gluster
21:25 sickness ec xlator is like raid ;)
21:27 xoritor xlator is awesome
21:27 xoritor just wish i could figure out how to make it use libgfapi
21:30 _dist semiosis: it may not even be possible through current virsh cli to add a gluster disk, but I'm still reading
21:32 xoritor if anyone knows how to make libvirt use libgfapi to access bd-xlator logical volumes please let me know
21:32 xoritor ;-)
21:33 _dist xoritor: so far without hand editing virsh xml I can't even get a machine to run off libgfapi
21:33 ricky-ticky1 joined #gluster
21:35 xoritor _dist, hand editing is a given
21:35 xoritor _dist, there just simply is no other way to do it
21:36 xoritor Error starting domain: Cannot access backing file /VMs/test: Input/output error
21:36 ricky-ticky joined #gluster
21:36 xoritor libvirt 1.2.7 just gets it ALL wrong
21:36 xoritor and i mean ALL wrong
21:37 xoritor its so FUBAR its SNAFU
21:37 _dist semiosis has a gluster team build of libvirt 1.2.2 that I'm working with on that ppa
21:38 _dist if I have to hand edit xml files, I am just going to use qemu cli
21:38 _dist (but neither is realistic in a prod environment with tech users)
21:38 semiosis upcoming PPAs link: https://launchpad.net/~gluster
21:38 glusterbot Title: Gluster in Launchpad (at launchpad.net)
21:39 _dist to be completely honest I've still not seen a speed difference between fuse and libgfapi in my own tests, so that's always a fallback until it's possible through _normal_ methods to use libgfapi
21:40 xoritor _dist, i have seen a massive difference in not "speed" but responsiveness
21:41 semiosis _dist: wouldn't expect to see speed difference unless you were cpu-bound (which means practically that you are using infiniband & ssd)
21:42 _dist ah, that's fair. I'm using 10gbe & HDDs
21:42 _dist xoritor: if you don't mind, what's a decent way I can test the difference you're seeing?
21:43 xoritor ie... for mail servers the clients have a faster search result show up or email shows up in the trash faster
21:44 xoritor for windows desktops it seems they have fewer issues on roaming profiles when served by file servers
21:45 _dist there must be a reason behind that, how are you using your vms in libgfapi today? (we are using proxmox) but there are a couple reasons I don't want to do that anymore
21:47 xoritor i have quad bonded 1G (moving to 10G soon) over lacp with teamd for active load balancing (was using openvswitch)
21:47 xoritor the systems are a mix of setups
21:48 xoritor some hardware raid, some software raid
21:48 xoritor some stand alone disks
21:48 _dist we have 2x10gbe over lacp, but what I mean is how are you running your guests over libgfapi (cli, hand editing virsh xml, something else?)
21:48 _dist I can't really find a good way to do it
21:48 xoritor oh...
21:49 xoritor i hand edit the xml and import them into libvirt
21:49 xoritor on one host i create them
21:49 xoritor then use virsh to dump the xml
21:49 xoritor edit it to do all the stuff i need and destroy that domain
21:50 _dist I suppose that isn't the worst solution, I'd love to have some tests to prove to myself there's a significant difference
21:50 xoritor then once i am happy i import it on all the hosts
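(the hand-edit xoritor describes, roughly; the <disk> element below is the standard libvirt syntax for a gfapi-backed disk, with hypothetical domain, volume and host names:)
    virsh dumpxml testvm > testvm.xml
    # replace the file-backed <disk> with a gluster network disk along these lines:
    #   <disk type='network' device='disk'>
    #     <driver name='qemu' type='qcow2' cache='none'/>
    #     <source protocol='gluster' name='vmvol/testvm.qcow2'>
    #       <host name='server1' port='24007'/>
    #     </source>
    #     <target dev='vda' bus='virtio'/>
    #   </disk>
    virsh define testvm.xml    # repeat the define on every host that should be able to run the guest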
21:50 xoritor lots of "benchmarks" on the web
21:50 xoritor if you trust them
21:50 xoritor i just have less "issues"
21:50 _dist yeap I've read them, but every time I do something like boot up kali and do tests, the numbers are basically the same for throughput & iops
21:51 xoritor ie... windows desktops don't just lose connection cause fuse overloads
21:51 xoritor fuse does overload on daily workloads for me
21:51 _dist but to be fair, my "tests" are limited to a few machines, our prod (which is in fact libgfapi through proxmox) doesn't have issues
21:52 sickness anyone tried to build the server on win32 or cygwin? (I've googled but found nothing about)
21:52 xoritor ie... mail, some virtual desktops for remote workers, file sharing, printers, web sites, etc...
21:52 xoritor sickness, no sorry
21:53 sickness tnx
21:53 _dist well, I've never run anything for long periods of time on fuse other than file sharing. But you'd think there'd be specific types of stress you could put both under to verify the difference
21:56 xoritor it may just be that i HATE fuse ;-)
21:56 xoritor how are you mounting fuse? loopback or socket?
21:57 _dist loopback, but I've run where a file wasn't healed on the local brick before
21:58 _dist though, not sure if I actually tested it much in that state
21:58 xoritor hmm
21:58 xoritor a fuse issue maybe?
21:58 xoritor not sure on that one
21:58 xoritor that could be something else entirely
21:58 _dist well, right now I'm loading up a kali with a libgfapi and a fuse virtio drive
21:59 _dist simplest way I can think of to do side by side compares
21:59 xoritor kali has some good tools, but not sure what you would really run to load the io down
21:59 xoritor not that would be real world
22:00 _dist that's fair, but it also leaves "real world" pretty unqualified
22:00 xoritor well... real world is different for everyone
22:00 xoritor my load is not yours
22:01 xoritor each mail server gets different hits in amounts of ham and spam
22:01 _dist that's fair, an io playback from a prod week on libgfapi against fuse might be a good demo
22:02 Pupeno_ joined #gluster
22:02 xoritor that it would
22:02 xoritor a big capture though
22:04 _dist you could just do meta instead of real capture
22:04 ninkotech_ joined #gluster
22:05 xoritor hmm... may not get the real I/O then
22:05 _dist It's just pretty frustrating that a year after its release the only UIs to manage libgfapi vm storage (its sole purpose?) are not elegant enough for normal users to consider :)
22:06 msvbhat_ joined #gluster
22:06 SteveCooling joined #gluster
22:06 _dist I guess we're all waiting for libvirt to "get it right"
22:06 xoritor lol
22:06 xoritor very much so
22:07 xoritor libvirt is having some serious issues if you ask m
22:07 xoritor e
22:07 xoritor i have been very much considering just NOT using libvirt
22:07 xoritor there is nothing I am really doing that i HAVE to use libvirt for
22:08 _dist having a nice ui to create/modify machines is a pretty big deal
22:08 xoritor sure libvirt makes some things easier... but they make some things WAY harder
22:08 xoritor yes
22:08 _dist but I agree that lately it doesn't always make things easier
22:08 xoritor you can always install on a system and move it over then boot it cli with qemu
22:08 _dist newer versions of its vfio/pci-stub have required me to hand script those machines (because of bad flr support or other problems)
22:09 xoritor yep
22:09 xoritor many issues
22:09 _dist but, migration and live HW mods are so much less fun without a management tool
22:10 xoritor ture
22:10 atrius` joined #gluster
22:10 xoritor s/ture/true/
22:10 glusterbot What xoritor meant to say was: true
22:10 _dist :)
22:10 xoritor if you have it in shared storage though you can easily use pcs to monitor a  process and do it that way
22:11 _dist yeap, that's linux for you :)
22:11 xoritor or any other sort of cluster system
22:11 bennyturns joined #gluster
22:12 xoritor then you really just have to have a script that contains the qemu start up line you need to start with your arguments
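(such a script boils down to a qemu invocation using the gluster:// URI support qemu has had since 1.3; names below are hypothetical:)
    qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
        -drive file=gluster://server1/vmvol/testvm.qcow2,if=virtio,format=qcow2,cache=none \
        -vnc :1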
22:12 _dist xoritor: do you use virt-manager with your libvirt or?
22:12 xoritor right now yes, but it is being a real PITA
22:13 xoritor causing me more headache than help
22:13 _dist agreed, which was my next question (which version do you use that doesn't rewrite your xml the wrong way on you randomly)
22:13 chirino joined #gluster
22:13 xoritor _dist, I have not found one that does it right yet
22:14 xoritor they all mess something up somewhere some how
22:14 xoritor especially when it comes to glusterfs
22:15 xoritor do not go editing anything once you have made changes that add in glusterfs support or you may very well break that
22:15 _dist so it makes me consider straight cli, giving ovirt another chance (shudder), or pressuring proxmox to get dbg/new gluster
22:15 _dist because at least in my tests, heal info is useless for VMs until 3.5x
22:15 xoritor yea, i am really thinking about doing cli direct and maybe using pcs
22:16 xoritor LOL
22:16 xoritor yea well heal info is MUCH better now
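(the command in question, volume name hypothetical; on 3.5.x it no longer reports every open VM image as perpetually healing:)
    gluster volume heal vmvol info               # entries per brick still needing heal
    gluster volume heal vmvol info split-brain   # entries in split-brain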
22:16 _dist pcs?
22:16 xoritor clusterlabs.org
22:17 xoritor pcs is a tool to build clusters and even has a web gui
22:17 xoritor wui
22:17 xoritor lol
22:17 _dist :)
22:17 xoritor easy, fast, low overhead, stable
22:19 xoritor you can even have it monitor the services running inside your VMs
22:20 _dist JoeJulian: you around? the other day you said you'd rec 3.4.5 + patch over 3.5.2 for VMs, was there a strong reason for that?
22:22 xoritor i would like to hear that answer as well
22:23 _dist I took one of my proxmox nodes down for this trial build, but I'm not satisfied. So I honestly think I'm going to stay on proxmox, I'll build my own qemu and gluster for it if they won't (because heal info is important to me)
22:28 xoritor yes same here
22:33 xoritor although i do not use proxmox and do not have to stay with anything
22:33 xoritor the heal info is important
22:33 xoritor i may try a pcs cluster
22:35 xoritor if i can get qemu to run correctly with libgfapi at the cli the way i need it then i can build a cluster and use the bd-xlator stuff the way i want and not have to deal with the way someone else says i "have" to do things
22:36 xoritor good luck
22:44 ninkotech joined #gluster
22:47 Pupeno joined #gluster
22:49 Pupeno joined #gluster
22:51 Nowaker _dist, xoritor: i'm developing virtkick, www.virtkick.io, and I have planned support for Gluster. You may find it useful when it's released. btw, when I get to modifying libvirt domain xmls, I will make sure VirtKick doesn't override anything - you outlined a nasty problem in existing solutions. I recall this in virt-manager, very annoying.
22:52 Nowaker uhh, they are gone, just scrolled the log to the bottom. will paste it to them next time.
23:00 gildub joined #gluster
23:22 rwheeler joined #gluster
23:33 jcope joined #gluster
23:34 Ramereth JoeJulian: you around?
23:47 jcope left #gluster
23:50 Pupeno_ joined #gluster
23:53 plarsen joined #gluster
