IRC log for #gluster, 2014-08-15

All times shown according to UTC.

Time Nick Message
00:40 gildub joined #gluster
00:45 gildub joined #gluster
00:47 PeterA1 joined #gluster
01:02 PeterA joined #gluster
01:03 rotbeard joined #gluster
01:05 plarsen joined #gluster
01:07 PeterA1 joined #gluster
01:31 bala joined #gluster
01:33 Lee- joined #gluster
02:13 kdhananjay joined #gluster
02:27 kdhananjay joined #gluster
02:30 haomai___ joined #gluster
02:35 sputnik13 joined #gluster
02:36 NCommander left #gluster
02:58 bala joined #gluster
02:59 _Bryan_ joined #gluster
03:17 Pupeno joined #gluster
03:20 haomaiwa_ joined #gluster
03:21 recidive joined #gluster
03:39 haomai___ joined #gluster
03:58 sputnik13 joined #gluster
04:09 recidive joined #gluster
04:16 Jay joined #gluster
04:17 Jay hey
04:17 Jay setting up gluster for libvirt environment - have 30 servers w/ 4 bricks each, should i do a stripe replicate setup?
04:22 sputnik13 joined #gluster
04:27 Jay any insights?
05:20 JoeJulian @stripe
05:20 glusterbot JoeJulian: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
05:20 JoeJulian Jay: in other words, probably not.
05:24 Jay Thanks
05:26 Jay So stripe = no, then the next item is replicate - I have 30 servers, I was thinking of configuring them in 3s - replica 3.  Are there any concerns with doing that vs replica 2?
05:33 JoeJulian Replication levels should be based on mathematical calculations to meet your SLA.
05:35 JoeJulian http://www.eventhelix.com/realtimemantra/faulthandling/system_reliability_availability.htm#.U-2cIFRdWV4
05:35 glusterbot Title: System Reliability and Availability Calculation (at www.eventhelix.com)
05:37 JoeJulian @learn sla calculations as Calculate your system reliability and availability using the calculations found at http://www.eventhelix.com/realtimemantra/faulthandling/system_reliability_availability.htm . Establish replica counts to provide the parallel systems to meet your SLA requirements.
05:37 glusterbot JoeJulian: The operation succeeded.
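For reference, the parallel-systems arithmetic behind that factoid, with purely illustrative numbers: if one replica of the data is available 99% of the time, then

    availability(n replicas in parallel) = 1 - (1 - A_single)^n
    replica 2: 1 - (0.01)^2 = 0.9999   (99.99%)
    replica 3: 1 - (0.01)^3 = 0.999999 (99.9999%)

so the replica count falls out of the SLA target rather than a fixed rule of thumb.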
05:53 _pol joined #gluster
05:53 _pol_ joined #gluster
05:54 jiku joined #gluster
06:46 LebedevRI joined #gluster
06:52 _pol_ joined #gluster
06:59 ricky-ti1 joined #gluster
07:07 ekuric joined #gluster
07:08 ctria joined #gluster
07:11 Humble joined #gluster
07:14 purpleidea joined #gluster
07:31 mbukatov joined #gluster
07:35 ramteid joined #gluster
07:35 tryggvil joined #gluster
07:53 mator joined #gluster
07:54 mator hello. what is the proper way to remove one server from a glusterfs cluster with a distributed volume, since the hardware (storage) on this server has failed?
07:54 mator thanks
07:54 mator remove bricks from volume first or just do peer detach ?
08:14 rolfb joined #gluster
08:24 ws2k3 hello, if i want to create a volume with 2 identical servers which both have a full dataset, then this is a good command right: gluster volume create datapoint replica 2 transport tcp gluster1:/mnt/gluster gluster2:/mnt/gluster
08:31 ws2k3 is there a command i can run on the client to see which glusterfs server i'm connected to now ?
08:32 ws2k3 i made a 2 node replicate cluster, mounted the first one, then shut down that server to see if it would automatically switch to the other server, and it does, but is there a command i can use to see that ?
08:37 mator ws2k3, mount ?
08:39 crashmag joined #gluster
08:42 ndevos ws2k3: no, you can not directly see that with a command, a client will connect to all the replicas at the same time, not to one or the other
08:42 ndevos ws2k3: you can see if the connection to a brick has been lost/re-established in the logs of the client
08:46 ws2k3 ndevos okay so now i shut down server 1 and changed some files from the client (on server 2); server 1 is back online but it seems that server 1 does not automatically get the new changes - how can i let server 1 join the cluster again
08:48 ndevos ws2k3: there is a self heal daemon that will sync the changes after a certain time (default interval-check is 10? minutes)
08:48 ndevos ws2k3: you can also ,,(self heal) the files manually
08:48 glusterbot ws2k3: I do not know about 'self heal', but I do know about these similar topics: 'targeted self heal'
08:48 ndevos @targeted self heal
08:48 glusterbot ndevos: https://web.archive.org/web/20130314122636/http://community.gluster.org/a/howto-targeted-self-heal-repairing-less-than-the-whole-volume/
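For reference, the manual heal commands ndevos is pointing at look roughly like this on 3.3 or newer (volume name hypothetical); as it turns out below, ws2k3 is running 3.2.5, which predates the self-heal daemon and these commands:

    gluster volume heal datapoint            # heal entries known to need it
    gluster volume heal datapoint full       # crawl the whole volume and heal everything
    gluster volume heal datapoint info       # list entries still pending heal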
08:52 ws2k3 and how would i add a 3rd server (full replica)
08:53 _pol joined #gluster
08:56 calum_ joined #gluster
09:08 ws2k3 cause i have a 3rd server now, the probe was successful but when i try to add it it says Incorrect number of bricks supplied 1 for type REPLICATE with count 2
09:09 vimal joined #gluster
09:14 mator left #gluster
09:16 ndevos ws2k3: you should be able to do it with a command like this: gluster volume add-brick $VOLNAME replica 3 $SERVER:$PATH_TO_BRICK
09:18 ws2k3 i am trying but it does not work
09:18 ws2k3 i use gluster volume add-brick replica 3 datapoint 10.1.2.9:/mnt/gluster
09:19 ndevos you have to <-> the "replica 3" and datapoint
09:20 ws2k3 hmm i dont understand what you mean with <->
09:20 ndevos the command should look like this: gluster volume add-brick datapoint replica 3 10.1.2.9:/mnt/gluster
09:21 ws2k3 error: wrong brick type: replica, use <HOSTNAME>:<export-dir-abs-path>
09:22 ndevos hmm, "gluster volume help" shows this for me: volume add-brick <VOLNAME> [<stripe|replica> <COUNT>] <NEW-BRICK> ... [force] - add brick to volume <VOLNAME>
09:22 ws2k3 yes i know, i already googled and the examples are saying the same as you are, but it really gives me this error
09:22 ndevos ws2k3: what version are you running?
09:22 ws2k3 how can i check that ?
09:22 ws2k3 i installed from repository so i dont know
09:22 ndevos rpm -q glusterfs-server ?
09:23 ws2k3 i use ubuntu not red hat
09:23 ndevos dpkg <something>?
09:24 ndevos or glusterfs --version
09:24 ws2k3 3.2.5
09:24 ndevos oh, wow, thats pretty old and probably does not support a replica 3
09:25 ws2k3 ah okay well its from the ubuntu repository
09:25 ndevos 3.5.2 is the most current stable version, or 3.4.5
09:25 ndevos you probably want to use the version from the ,,(ppa)
09:25 glusterbot The official glusterfs packages for Ubuntu are available here: 3.4 stable: http://goo.gl/u33hy -- 3.5 stable: http://goo.gl/cVPqEH -- introducing QEMU with GlusterFS 3.4 support: http://goo.gl/7I8WN4
09:25 ws2k3 glusterfs 3.2.5 built on Jan 31 2012 07:39:58 pretty old
09:27 ws2k3 let me see if i can upgrade
09:44 ndevos ws2k3: upgrading is not trivial, it is probably easier to remove the packages, delete any config files and install the new version
09:44 ndevos at least, that is if you dont have any production data on the volumes yet
09:45 ws2k3 yeah that's what i meant
09:46 ws2k3 no they are not production volumes, just testing and practising in vmware
09:46 ws2k3 we do plan to take it into production someday
09:47 edward1 joined #gluster
09:48 ws2k3 ndevos is it possible to expand glusterfs to another datacenter ?
09:48 ws2k3 so i can read and write to the cluster in both datacenters; the latency between them is around 120 ms
09:49 Norky joined #gluster
09:49 ndevos ws2k3: if the link is very stable it should work, but geo-replication (master site read/write, slave site read-only) might be more suitable
09:50 ndevos ws2k3: you can have different volumes and mark them master/slave accordingly, so each site would contain a backup of the other
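A rough sketch of the geo-replication arrangement being described, with hypothetical host and volume names and the syntax of the 3.5 series (older releases use a slightly different form):

    # on the master site, after creating a matching volume at the remote datacenter
    gluster volume geo-replication datapoint remote-dc::datapoint-dr create push-pem
    gluster volume geo-replication datapoint remote-dc::datapoint-dr start
    gluster volume geo-replication datapoint remote-dc::datapoint-dr status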
09:51 ws2k3 the link is not very stable it drops sometimes
09:54 ndevos ws2k3: that could be an issue, when the link drops, one side would become read-only
09:54 ndevos ws2k3: or, you can run into ,,(split-brain) situations and need an admin to resolve it
09:54 glusterbot ws2k3: (#1) To heal split-brain, use splitmount. http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/, or (#2) For additional information, see this older article http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
10:04 ws2k3 now it works, i can add the 3rd server on the fly
10:04 ws2k3 so i can read and write to any machine right ?
10:05 ws2k3 and because i use replicate the data on all machines should be identical right ?
10:08 ws2k3 ndevos any idea why i cannot find the glusterfs server with nfs ?
10:09 ndevos ws2k3: glusterfs contains its own nfs server, but you need to have rpcbind or portmap running before you (re)start the glusterd service
10:10 ndevos ws2k3: you can also not (or, rather should not) run a nfs-client on the storage servers if they export the volume(s) over nfs
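A minimal sketch of using that built-in NFS server (hostnames and paths hypothetical); it only speaks NFSv3 over TCP, so the client has to ask for that explicitly:

    # on the storage server, before (re)starting glusterd
    service rpcbind start
    # on the NFS client
    mount -t nfs -o vers=3,proto=tcp gluster1:/datapoint /mnt/datapoint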
10:20 ws2k3 ndevos i noticed a glusterfs server will automatically download all the new data after it has been down ?
10:22 ws2k3 unless data is written while it was unable to communicate with the other nodes in the cluster - only then does a split brain occur, right ?
10:31 Zordrak joined #gluster
10:44 Pupeno_ joined #gluster
10:46 ndevos ws2k3: yes, a split brain occurs when both sides of the volume get changes to the same file/dir and they can not get merged automatically
10:47 ndevos ws2k3: if both sides change different files, they normally should get healed automatically and you would not see a split-brain
10:53 _pol joined #gluster
10:56 ws2k3 and how do i know when a split brain occurs
10:58 Pupeno joined #gluster
11:01 diegows joined #gluster
11:02 qdk joined #gluster
11:08 andreask joined #gluster
11:09 Gugge joined #gluster
11:14 gildub joined #gluster
11:28 Pavid7 joined #gluster
11:42 gildub joined #gluster
11:47 lunux joined #gluster
11:55 Pupeno_ joined #gluster
12:00 lunux joined #gluster
12:10 chirino joined #gluster
12:16 mojibake joined #gluster
12:35 nbalachandran joined #gluster
12:35 chirino_m joined #gluster
12:42 plarsen joined #gluster
12:53 ninthBit joined #gluster
12:54 _pol joined #gluster
13:04 ws2k3 i have a question about a distributed replicated volume: if i have 2 servers, each with 2 disks, can i make it so that a replica of each disk is kept on the other machine? that way i can always heal/repair the data if one server goes down, right ?
13:11 Andreas-IPO_ joined #gluster
13:13 hflai_ joined #gluster
13:16 rolfb joined #gluster
13:17 tty00_ joined #gluster
13:17 rwheeler_ joined #gluster
13:17 C_Kode Yes.
13:17 C_Kode You can basically do a raid1 setup.  Where server1's data is mirrored on server2
13:17 tdasilva joined #gluster
13:18 theron joined #gluster
13:18 ws2k3 yeah that is what i'm looking for but i dont entirely understand how i should do that
13:19 C_Kode The docs explain how to do this.
13:19 B21956 joined #gluster
13:19 C_Kode When you are adding the bricks to the volume, you just need to list them in the proper order.  The docs explain how to do this
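Concretely, replica sets are formed from consecutive bricks on the command line, so for the 2-servers-with-2-disks case the bricks are listed so that each pair spans both machines (names and paths hypothetical):

    gluster volume create datavol replica 2 \
        server1:/bricks/disk1 server2:/bricks/disk1 \
        server1:/bricks/disk2 server2:/bricks/disk2
    # two replica pairs, each mirrored across server1 and server2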
13:20 diegows a newbie question, is it ok to mount a volume from multiple clients?
13:22 kkeithley diegows: yes, just like NFS (it wouldn't be much use otherwise)
13:22 diegows ok, just to confirm... thanks...
13:23 diegows what's your name? I want to remember it in case I lose some data in the future :)
13:23 Debolaz joined #gluster
13:23 ws2k3 lol
13:24 kkeithley community glusterfs comes with a 200% money back guarantee.
13:24 Pavid7 joined #gluster
13:25 diegows nice
13:25 diegows :)
13:26 Debolaz I'm having a little problem with my volume. It's a replication volume with 3 bricks. But 1 of the bricks is apparently offline, and I can't figure out why. All processes are running on all nodes. Which log file should I be looking at to identify problems with the local node brick?
13:27 bennyturns joined #gluster
13:28 Debolaz And that very moment, I discover /var/log/glusterfs/bricks/glusterfs-brick.log :P
13:28 twx joined #gluster
13:28 bene2 joined #gluster
13:30 capri joined #gluster
13:40 ninthBit I think we are learning the hard way about glusterfs and POSIX UID/GID mismatches between different linux servers..... it would be nice if gluster, when reporting heal status, could indicate why a file is not "healing" - for example that glusterfs is having issues with the UID/GID of the file differing between the nodes. Or have we greatly missed where this would have been easily identified somewhere in gluster?
13:43 ninthBit it is a combination of Active Directory users writing to the gluster volume through samba and our poorly synchronized AD-user-to-UID mapping between the peers.  then when gluster is dealing with a gluster pointer file it always lists these files in the heal status.  i am working out exactly how to reproduce it; we have the tools that can trigger it but not the specifics to do it manually
13:44 ninthBit i will follow up with more information about it. some of it is user error, but the other part is that gluster only works when the files have not been moved to another replica set because of a rename, where the other replica set keeps the link file.
13:45 hagarth joined #gluster
13:45 ninthBit ok clear that up later.. going back into the server to get this figured out and post up a report on what i can dig out
13:46 SmithyUK Hey, having a problem with v3.5.2 - mkdir is returning invalid argument when trying to create any folders on a gluster mount
13:47 SmithyUK -bash-3.2$ mkdir df16fd1a9f8d87205beeaca9bef777bd
13:47 SmithyUK mkdir: cannot create directory `df16fd1a9f8d87205beeaca9bef777bd': Invalid argument
13:47 SmithyUK any ideas why that might be? has been remounted since upgrade
13:48 SmithyUK strace output mkdir("df16fd1a9f8d87205beeaca9bef777bd", 0777) = -1 EINVAL (Invalid argument)
13:49 C_Kode Works for me.
13:51 ndevos SmithyUK: you should check if the parent directory (the dir where you do the mkdir) is in a ,,(split-brain) situation
13:51 glusterbot SmithyUK: (#1) To heal split-brain, use splitmount. http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/, or (#2) For additional information, see this older article http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
13:54 msmith_ joined #gluster
14:00 wushudoin| joined #gluster
14:07 kumar joined #gluster
14:10 SmithyUK ndevos: it's not a replicate volume
14:10 ndevos SmithyUK: directories are always replicated, even on a distribute volume :)
14:11 SmithyUK ndevos: ah, so how would i check for the problem? $ sudo gluster volume heal videostore-las info split-brain Volume myvol is not of type replicate
14:12 ndevos SmithyUK: hmm, I'd check on the bricks, see if 'getfattr -m. -ehex -d $PATH_TO_DIR' returns different gfids for different bricks
14:16 SmithyUK ndevos: they seem to all be trusted.gfid=0x00000000000000000000000000000001
14:16 ndevos SmithyUK: if that is the root of the volume, then that's fine
14:17 SmithyUK ndevos: it is yeah
14:18 ndevos SmithyUK: i'm not sure what else could cause issues then.... maybe directory permissions or something of that kind
14:19 SmithyUK after a remount i'm getting a slightly different issue... i can now use mkdir but can't remove the dir afterwards. -bash-3.2$ rmdir 0012ea1c5aa3095733053a01a818cbe2
14:19 SmithyUK rmdir: 0012ea1c5aa3095733053a01a818cbe2: Transport endpoint is not connected
14:19 SmithyUK that was after a umount -l /mnt/gluster
14:23 sputnik13 joined #gluster
14:23 ndevos you should not use 'umount -l' if possible, the mount will then be kept open until the last user exits, it's often pretty confusing if you need to read logs (two processes writing to the same log, when you expect only one)
14:24 ndevos SmithyUK: you should check the logs under /var/log/glusterfs/$PATH_TO_MOUNTPOINT.log and see what caused the "Transport endpoint is not connected"
14:24 SmithyUK ndevos: you just completely reminded me that it might be logged! "disk layout missing"
14:25 SmithyUK so fix-layout i presume?
14:26 ndevos yeah, fix-layout could work, but it's strange that its missing...
14:27 SmithyUK hmm, all failed on status page
14:27 SmithyUK localhost                0        0Bytes             0             1             0    fix-layout failed               0.00
14:29 recidive joined #gluster
14:39 SmithyUK Hi guys, fixed by unmounting on all hosts, restarting gluster daemons on all peers and remounting
14:39 SmithyUK Seems to be working fine now
14:55 _pol joined #gluster
15:03 ira joined #gluster
15:04 mojibake Previously as seen in #gluster it was suggested for dealing with PHP to "glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH --fopen-keep-cache" Does anyone know how to translate that into /etc/fstab? I know how to set "defaults,direct-io-mode=off" but other options like fopen-keep-cache don't seem to work.
15:09 cwray joined #gluster
15:17 ndevos mojibake: /sbin/mount.glusterfs is a shell script, you can check there what options it does (not) support
15:17 ndevos mojibake: if options are missing, you can file a bug for those
15:17 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
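For what it's worth, when the mount script does support them, the fstab translation of those fuse options looks roughly like this (server, volume and timeout values hypothetical; which options /sbin/mount.glusterfs accepts varies by release):

    gluster1:/datapoint  /mnt/gfs  glusterfs  defaults,_netdev,attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache  0 0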
15:21 _pol joined #gluster
15:22 bit4man joined #gluster
15:23 daMaestro joined #gluster
15:35 Norky joined #gluster
15:35 nbalachandran joined #gluster
15:36 mojibake ndevos: Thank you.
15:37 ndevos you're welcome, mojibake
15:38 sputnik13 joined #gluster
15:40 qdk joined #gluster
15:50 gmcwhistler joined #gluster
15:59 theron joined #gluster
16:12 calum_ joined #gluster
16:29 theron joined #gluster
16:30 msmith_ joined #gluster
16:46 tom[] joined #gluster
16:50 qdk joined #gluster
17:06 tryggvil joined #gluster
17:08 clyons joined #gluster
17:10 clyons joined #gluster
17:10 msmith_ joined #gluster
17:15 calum_ joined #gluster
17:18 plarsen joined #gluster
17:21 zerick joined #gluster
17:30 julim joined #gluster
17:32 lunux joined #gluster
17:33 nbalachandran joined #gluster
17:33 _pol joined #gluster
17:45 dblack joined #gluster
17:52 MacWinner joined #gluster
17:52 wushudoin joined #gluster
17:54 caiozanolla ppl, gluster is showing duplicate directories on "ls"!!! content from the bricks is not duplicated. anyone?
17:54 caiozanolla version 3.5.2
17:57 caiozanolla ps. its happening at only one client, other clients see the correct directory structure. remounted several times, still listing duplicates.
17:57 caiozanolla saw it happening once a couple of weeks ago when we had a split brain situation
17:58 chirino joined #gluster
18:15 PeterA joined #gluster
18:18 JoeJulian caiozanolla: Are you still using your hand-written vol files?
18:19 JoeJulian (that *was* you that I'm remembering, wasn't it?)
18:19 caiozanolla JoeJulian, no, did it once on one client. all dynamically generated now
18:20 JoeJulian Hurray!!!
18:20 JoeJulian I'm not losing my mind yet.
18:20 JoeJulian check the client log
18:21 mojibake Doing some testing and it looks like client lost connectivity..
18:21 mojibake Getting the following error.
18:21 mojibake ls: cannot access /mnt/gfs/web-content/: Transport endpoint is not connected
18:22 JoeJulian network problem?
18:22 mojibake mount shows "mounted". But looks like client lost communication.
18:22 mojibake Would like to investigate/understand what went wrong..
18:23 mojibake Should I try to telnet to the gfs ports on master?
18:23 JoeJulian maybe that too, but after it's mounted your concern is more likely the brick port.
18:24 mojibake connect to brick port OK.. (brick port from output of gluster volume status)
18:24 mojibake OK using telnet that is.
18:25 _dist joined #gluster
18:25 Spiculum http://gluster.org/pipermail/gluster-users/2013-November/037829.html
18:25 Spiculum can anyone get to that link?
18:25 Spiculum and not get "not found"
18:26 mojibake JoeJulian: It is replica 2, telnet to replica brick OK too.
18:27 _dist so, going with my own build for virt JoeJulian: should I use 3.4 or 3.5 ?
18:28 mojibake JoeJulian: Nothing in client glusterfs.log since mounting/connecting 4hrs ago. Was working until 15mins ago..Maybe running ab against it blew something up.
18:28 mojibake ab against apache with gfs behind it.
18:28 JoeJulian _dist: My own preference is still 3.4.5 with the patch from http://review.gluster.org/8402
18:29 glusterbot Title: Gerrit Code Review (at review.gluster.org)
18:29 recidive joined #gluster
18:31 caiozanolla JoeJulian, cannot make any sense of it. btw, this is a client on the same machine as the server. strange, it shows a different version of gluster, 3.3 vs 3.5.2, installed from the gluster repo. http://pastie.org/9476224
18:31 JoeJulian mojibake: Odd that there would be nothing in the client log. Is the partition that /var/log is on full?
18:31 glusterbot Title: #9476224 - Pastie (at pastie.org)
18:32 kkeithley Spiculum: no, not me. Considering everything on http://gluster.org/pipermail/gluster-users/2013-November/ is http://gluster.org/pipermail/gluster-users/2013-November/01xxxx.html I'm suspicious of your URL. Where does that come from?
18:32 glusterbot Title: The Gluster-users November 2013 Archive by thread (at gluster.org)
18:35 mojibake JoeJulian: /var/log/ was not full. But I rebooted before you commented that.. Mount came back up after reboot. Tried to unmount without reboot, but could not because it said something was using it.. But could not lsof, because the mount was messed up...
18:35 mojibake Will keep testing and let you know if behavior is repeatable.
18:36 caiozanolla oh man, that is really frustrating. I've been doing nothing out of the ordinary, standard replicated setup, 2 clients (besides a client on each server) and been having all sorts of problems: the self healing daemon is non functional, clients listing duplicate files, unsolvable split-brain, its driving me nuts.
18:38 JoeJulian caiozanolla: Since you've found that one version discrepancy, maybe that's related.
18:40 caiozanolla the only reason im fighting it is that I know I can recover and rebuild the whole filesystem by scavenging files from the bricks; other than that, I really need this to work, for I architected a whole solution based on this.
18:41 JoeJulian And it works very reliably for a large number of users. So what's different about your case?
18:41 caiozanolla seriously, documentation is a mess, and thanks god you and semiosis are here to help
18:41 JoeJulian ... which, I understand, is nearly an impossible question to ask.
18:42 _dist JoeJulian: that patch is pretty recent, so you'd recommend I download the July 23rd source, patch it and compile it?
18:42 JoeJulian caiozanolla: I just wonder if the metadata's gotten into an impossible state due to some inconsistency when you tried using your own vol file.
18:43 JoeJulian caiozanolla: Didn't you also rsync from one brick to another, or am I confusing that with someone else.
18:43 JoeJulian _dist: which distro?
18:44 _dist debian
18:44 _dist wheezy, sorry
18:44 JoeJulian Ah, right... then yes. :D
18:45 _dist ok, honestly it looks like I'm going to have to build my own debs for everything anyway :)
18:45 caiozanolla JoeJulian, that file I used was for io-cache option testing, it was copied from the dynamically generated one and I just appended the cache size option. reverted immediately. this thing about duplicate directories happened in my 1st week using gluster when I had a crash because of a full /tmp
18:45 JoeJulian _dist: We're setting up a group-managed repo on launchpad. I suspect this will lead to maintained debian builds.
18:46 JoeJulian _dist: any volunteers? ;)
18:46 _dist ok, I'm willing to help out for ubuntu & debian, I'd like to add a qemu-kvm & libvirt compile too since none of the default repos have glusterfs support compiled into either package
18:46 JoeJulian caiozanolla: ok. Not trying to point fingers, just trying to think of all the possibilities.
18:47 caiozanolla then the self healing daemon took it from there and everything started to run smoothly. (obviously I had to manually heal some files in split-brain) but it was going on just fine. then we had to swap servers, and out of nowhere the self healing daemon stopped.
18:47 sonicrose joined #gluster
18:48 caiozanolla semiosis tried to help, i've shown him some logs, he mentioned selinux and iptables, which I had disabled; still, shd is not working
18:48 _dist JoeJulian: I talked to semiosis about it months ago, but decided to go with proxmox at the time. However, I can't stand being tied to specific packages anymore, I want my freedom back
18:48 JoeJulian _dist: I would like that too. https://launchpad.net/~gluster
18:48 glusterbot Title: Gluster in Launchpad (at launchpad.net)
18:48 theron joined #gluster
18:48 JoeJulian Hehe, I hear that.
18:48 caiozanolla now this duplicate dir thing is happening on one client
18:49 JoeJulian caiozanolla: Let me get this performance data assembled and sent out (I was supposed to do this days ago) and I'll be able to concentrate more on what's ailing you.
18:50 caiozanolla JoeJulian, thanks man
18:51 _dist JoeJulian: it looks like this already contains qemu-kvm & glusterfs for ubuntu, missing libvirt though
18:52 JoeJulian Plus, I think that's older.
18:54 rotbeard joined #gluster
18:54 _dist ok, well I've never done a launchpad build before but I can make the debs. I'd love it if we gave out separate options like with/without other hypervisors (libvirt), or we could just agree on our "preferred" build for qemu gluster
18:54 _dist I'm sure it's not all that difficult, I'll just make my own practice one first, unless someone'll walk me through it
18:55 JoeJulian I haven't done it through lp yet either.
18:55 _dist ok, I'll put some hours in this weekend then, it would be so much nicer than using checkinstall all the time
18:56 JoeJulian And that patch I referenced isn't in yet. We should add a -testing ppa also, imho.
19:00 _dist sounds good, my goal this weekend will be to get a build vm together for wheezy and trusty, I'm assuming lp does the compile and probably has some kind of meta file(s) but I'll look into that over a few drinks on sat
19:01 JoeJulian semiosis: Can you pitch in some info regarding launchpad and building ppas?
19:01 Spiculum kkeithley: google, i wanted to find out if you can convert a distributed volume into a replicated one
19:17 caiozanolla JoeJulian, here is the story, it might help you have other ideas. http://pastie.org/9476322
19:17 glusterbot Title: #9476322 - Pastie (at pastie.org)
19:26 jruggiero joined #gluster
19:36 Philambdo joined #gluster
19:37 kkeithley Spiculum: you can, by adding more bricks. And with some fiddling you can consolidate a distributed volume, i.e. remove a brick, then recycle it and add it back as a replica.
19:38 kkeithley In the simple case, if you have a three brick distribute volume, you'd have to add three more bricks to make it into a replicated volume.
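A sketch of that simple case (names hypothetical): the replica count is raised while the new bricks are added, then self-heal populates them:

    gluster volume add-brick myvol replica 2 \
        server4:/bricks/b1 server5:/bricks/b1 server6:/bricks/b1
    gluster volume heal myvol full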
19:39 sjm left #gluster
19:42 Spiculum oh i see, it might just be easier for me to create a new replicated volume and copy everything over
19:42 Spiculum i was just hoping to save time before copying 3tb
19:43 Spiculum thanks for the info
19:51 gts joined #gluster
20:18 Pupeno joined #gluster
20:21 semiosis JoeJulian: pitch in what?  where?
20:24 _dist semiosis: I'm going back to a custom vm build, I'm willing to help build debian & ubuntu compiles for all the virt tools and gluster that work in that use case
20:24 _dist but I'm not familiar with lp at all
20:25 semiosis _dist: using ubuntu trusty?
20:26 Pupeno_ joined #gluster
20:26 _dist yeah trusty makes sense now
20:27 _dist I'm going to have to maintain debs anyway for glusterfs, qemu-kvm & libvirt
20:27 semiosis why?
20:28 _dist https://launchpadlibrarian.net/180993413/buildlog_ubuntu-trusty-amd64.qemu_2.0.0%2Bdfsg-2ubuntu1.2_UPLOADING.txt.gz (glusterfs: no)
20:28 semiosis right
20:28 semiosis ok
20:29 semiosis so afaik all you need is qemu built with glusterfs, which you can find here: https://launchpad.net/~gluster
20:29 glusterbot Title: Gluster in Launchpad (at launchpad.net)
20:29 semiosis depending on which version of glusterfs you need
20:29 _dist well, jj was suggesting the need to pull patch http://review.gluster.org/#/c/8402/ , and libvirt isn't in that ppa
20:29 glusterbot Title: Gerrit Code Review (at review.gluster.org)
20:30 semiosis are you sure you need a special libvirt?  i thought that was only for precise
20:30 _dist ah, let me check actually, that one I just assumed
20:31 semiosis ok if you want to use patches then please build your own packages, that would be very helpful
20:31 semiosis otherwise i hope that what i put in the gluster PPAs is enough
20:32 semiosis i'm not even inclined to support qemu packages for precise
20:32 _dist https://launchpadlibrarian.net/172804408/buildlog_ubuntu-trusty-arm64.libvirt_1.2.2-0ubuntu13_UPLOADING.txt.gz gluster no, yes for everything else though :)
20:32 semiosis at least not until people ask for it
20:32 semiosis hmm
20:32 _pol_ joined #gluster
20:32 semiosis ok I'll upload libvirt tonight
20:33 _dist perfect, also is that ppa linked off the gluster site? maybe put it in a text file located http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST/Ubuntu/
20:33 glusterbot Title: Index of /pub/gluster/glusterfs/3.4/LATEST/Ubuntu (at download.gluster.org)
20:34 semiosis yes, i will
20:34 Pupeno joined #gluster
20:34 semiosis the new team section is still a work in progress
20:35 semiosis so, you want to make your own PPA?  here's a quick crash guide...
20:35 semiosis 1. create launchpad account & make a PPA
20:36 semiosis 2. make a GPG key & add it to your LP account
20:36 tom[] joined #gluster
20:37 semiosis 3. get a source package (which is usually two tarballs: the original source tree & a .debian.tar.gz)
20:38 semiosis 4. unzip the source tree, then unzip the debian.tar.gz into the top of the source tree
20:39 JoeJulian semiosis: qemu packages.... I'm being asked already.
20:39 semiosis 5. update the debian/changelog file to have your launchpad email address
20:41 semiosis 6. debuild -S -sa & go up a folder
20:41 semiosis 7. dput ppa:username/ppaname the-package-file.changes
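Condensed into shell, the same flow looks something like this (package name, version and PPA are hypothetical):

    apt-get source glusterfs               # or fetch the .orig and .debian tarballs by hand
    cd glusterfs-3.4.5
    dch -i                                 # new changelog entry, using your launchpad email
    debuild -S -sa                         # build and sign the source package
    cd ..
    dput ppa:username/ppaname glusterfs_3.4.5-1~ppa1_source.changes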
20:42 semiosis JoeJulian: i'm making packages for qemu in the qemu-* ppas
20:42 semiosis JoeJulian: is there something else you wanted?
20:43 JoeJulian Not that I know of... You just said you weren't inclined...
20:44 semiosis JoeJulian: you want qemu for ubuntu 12.04 precise?
20:44 semiosis ehhhh
20:44 JoeJulian unfortunately.
20:45 semiosis JoeJulian: any plans to upgrade to trusty?
20:46 semiosis october is a great time to upgrade LTS releases (that's when they do the .1)
20:46 JoeJulian I highly doubt that's going to change before the end of the year.
20:47 JoeJulian Too many other irons in the fire to add another variable.
20:47 plarsen joined #gluster
20:52 uebera|| joined #gluster
20:52 uebera|| joined #gluster
20:54 semiosis JoeJulian: want to contribute the precise packages?
20:54 semiosis since you're on the team now :)
20:54 JoeJulian That's my plan, yes.
20:54 semiosis woo
20:54 semiosis so, see my 7 steps to PPA happiness above
20:54 JoeJulian I'll probably work on that during the next two weeks vacation.
20:55 JoeJulian because that's how I vacation...
20:55 semiosis kick back, relax, upload
20:56 semiosis oh i should note re #2 above, the email addr on the gpg key must match the email addr in the top changelog entry and also be listed as an email addr in your LP profile
20:57 JoeJulian So I think I'm going to add a -testing for mid-release pulls that need testing (usually just by me it seems) and migrate them into release once they're proven. What do you think?
20:57 B21956 joined #gluster
20:58 _dist thx semiosis, I'll give it a try, but if you're going to upload libvirt I'll only need to do it if/when I need special compile opts
20:58 _dist (for ubuntu anyway)
20:58 semiosis right
20:58 semiosis so stay tuned for libvirt, hopefully tonight
20:59 semiosis JoeJulian: i'd suggest practicing on your own PPAs first before jumping into the team PPAs
20:59 JoeJulian Sure
20:59 semiosis one thing to note, you can only have one version of a package per release
21:00 semiosis so for example, it wont let you upload a version older than whats already in the PPA, and if you upload a newer one the current will be deleted
21:00 semiosis after a successful build
21:00 _dist yeah you'd need different ppas
21:00 semiosis which is why I have PPAs like gluster-3.4
21:01 semiosis package version rules are mysterious, i still dont quite get it
21:01 JoeJulian I've done up-to step 7 several times now. Just haven't done the LP part.
21:01 semiosis so you're using pbuilder to build packages then?
21:02 semiosis _dist: why would you need a patch to the fuse xlator for qemu?
21:02 JoeJulian Looks like ubuntuN would be for the Nth package release of a version that has the same patch applied to all ubuntu versions. The ~preciseN would be for a patch that only affects the named release.
21:02 semiosis (just realized what http://review.gluster.org/#/c/8402/ is for)
21:02 glusterbot Title: Gerrit Code Review (at review.gluster.org)
21:04 semiosis JoeJulian: specifically, why does LP say that qemu 2.0.0+dfsg-2ubuntu1.3 is newer than 2.0.0+dfsg-2ubuntu1.3~gluster345.trusty2 ?
21:05 semiosis i think i need to do 2.0.0+dfsg-2ubuntu1.3gluster345~trusty2 instead
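The tilde behaviour can be checked locally with dpkg, which follows the same ordering rules launchpad uses: a ~ sorts before the end of the string, so the suffixed version counts as older than plain 2ubuntu1.3:

    dpkg --compare-versions 2.0.0+dfsg-2ubuntu1.3~gluster345.trusty2 lt 2.0.0+dfsg-2ubuntu1.3 && echo "tilde version is older"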
21:05 JoeJulian yes, i'm using pbuilder to build packages.
21:05 JoeJulian Then I put them in our chef recipes for ops to install.
21:07 JoeJulian Oh, right... if he's using libgfapi then he shouldn't hit that bug.
21:07 semiosis ah you're jj!  i didnt know who _dist was talking about
21:08 recidive joined #gluster
21:08 _dist hah :)
21:08 _dist sorry about that
21:08 semiosis np lol
21:14 JoeJulian 2.0.0+dfsg-2-gluster345-ubuntu1.3~trusty2
21:15 JoeJulian upstream_version-ppa_version-debian_revision
21:16 aaronott joined #gluster
21:17 semiosis problem then is that the one in Main will be newer because u > g
21:18 ThatGraemeGuy joined #gluster
21:18 semiosis maybe
21:18 semiosis actually i have no idea how it works
21:18 semiosis all worth a try :)
21:19 JoeJulian From what I'm reading, the ppa version will supersede the upstream version. If an upstream release is newer, ie. 2.0.1, then it will supersede that 2.0.0 ppa version.
21:22 xandrea joined #gluster
21:22 xandrea hi everyone
21:23 * JoeJulian exchanges pleasantries
21:25 _dist how come glusterbot didn't freak out?
21:25 JoeJulian "everyone"
21:25 _dist well then he's not smart enough to pick on anyone :)
21:26 tom[] joined #gluster
21:26 * _dist shudders at the matrix 2's "AI" characters
21:27 JoeJulian @mp show --id 38
21:27 glusterbot JoeJulian: The action for regexp trigger "^[Hh](i|ello)[.?!]*$" is "echo $nick: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will
21:27 glusterbot JoeJulian: eventually offer an answer."
21:29 semiosis thats right, JoeJulian will eventually offer an answer
21:29 JoeJulian hehe
21:30 recidive joined #gluster
21:31 Charles_ joined #gluster
21:31 Guest68662 hello, i have an input/output error when trying to do anything with a file, but gluster volume heal (volname) info shows no issues, and when i do an info on 'healed' it says the file with the i/o error is healed
21:37 _dist chucky_z: what are you trying to do wit hthe file?
21:37 _dist with the*
21:38 chucky_z we have an svn updater, i believe a small portion of files were updated directly on the brick
21:38 chucky_z however auto-heal seemed to mostly fix it
21:38 chucky_z im just trying to run 'svn info'
21:39 chucky_z also i may have done something potentially bad -- but it fixed it.  i deleted the file directly out of the local brick and re-ran heal full, i can run an svn info again
21:40 chucky_z heh, ok so the brick is named 'webcontent,' and running `gluster volume heal webcontent info split-brain` shows 1024 files in split brain
21:40 chucky_z any reason why 'heal full' isn't doing anything about this?
21:41 _dist if something gets written directly to a brick (sounds like 1024 files were) gluster can't keep track of who's "correct"
21:41 semiosis chucky_z: "split-brain" is when gluster *can't* do anything about it
21:42 chucky_z OK, I have two servers and I always want one of them to be the correct server, is there a resolution to this using the gluster tool, or should I do something by hand?
21:42 semiosis chucky_z: what version of glusterfs?
21:43 semiosis i'm not sure what the latest procedure is to heal ,,(split-brain)
21:43 glusterbot (#1) To heal split-brain, use splitmount. http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/, or (#2) For additional information, see this older article http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
21:43 semiosis hey that looks useful
21:43 chucky_z 3.5.2
21:43 chucky_z yes it does :)
21:44 semiosis JoeJulian: splitmount is brilliant
21:45 Pupeno_ joined #gluster
21:46 chucky_z i recently changed the way we do gluster mounts (also my first time setting up, the old admin is long gone...) because we *kept* running into split brain conditions, but alas while you can build a better config, you cannot build a smarter.... person.
21:49 sprachgenerator joined #gluster
21:49 xandrea we talk about glusterfs?
21:49 xandrea I have a little issue
21:49 xandrea I set the rules for iptables
21:50 xandrea and it works well
21:50 xandrea until I do a reboot
21:50 xandrea glusterfs does not reconnect
21:50 chucky_z xandrea: are you good with iptables?
21:50 gts joined #gluster
21:51 xandrea I need to restart the service after reboot
21:51 xandrea and then it works again
21:51 xandrea I’m a newbie hahah
21:51 xandrea I mean the iptable service
21:51 xandrea I cannot understand
21:52 semiosis xandrea: what linux distro?
21:52 caiozanolla xandrea, depending on the distro u can do a /etc/init.d/iptables save
21:52 xandrea centos 7
21:53 semiosis caiozanolla: that command will work on cent7 right?
21:53 semiosis or iirc 'service iptables save'
21:53 xandrea to restart the service “I do systemctl restart iptable”
21:53 caiozanolla dunno. it works on 6 afaik
21:54 xandrea do you think I have to save my changes?
21:55 semiosis yes
21:55 caiozanolla xandrea, you most certainly do
21:55 xandrea mmm.. I'll search for the way
21:55 semiosis service iptables save
21:56 semiosis should do it
21:56 xandrea ok.. I’ll try thanks
21:56 semiosis yw
21:56 caiozanolla xandrea, get the rules the way you want, save then restart, it should be back the way you left after reboot
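On CentOS 7 specifically, the classic iptables service (and its save action) comes from the iptables-services package, since firewalld is the default there; a hedged sketch:

    yum install iptables-services
    systemctl disable firewalld        # only if you are managing rules with plain iptables
    systemctl enable iptables
    service iptables save              # writes the running rules to /etc/sysconfig/iptables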
21:58 _dist does anyone know why changing the drive cache on a libgfapi vm disk image after partitioning makes the disk inaccessible unless you use the method it was created with?
21:59 aaronott left #gluster
22:01 chucky_z hm.
22:02 chucky_z so i'm following that split brain fix guide semiosis but when i re-run the heal it just puts them back into split-brain with a newer timestamp
22:02 semiosis wow
22:02 semiosis you used splitmount & deleted from one of the mounts?
22:02 chucky_z yep
22:02 _dist perhaps I didn't umount (and the cache was still present)
22:03 _dist nope, feels like this is new but I must be wrong
22:04 chucky_z i'm wondering if it's much more than 1024 files that are in split-brain, but the heal tool will only detect 1024 at a time?
22:05 chucky_z yep...
22:10 chucky_z this is well hosed for sure
22:10 chucky_z perhaps i should just recreate the volume?
22:11 _dist nm, I'm just a fool and forgot about remote-dio volume setting (which would obviously be required for cache=none)
22:20 xandrea guys… I use glusterfs to replicate my kvm vms
22:20 xandrea can you recommend the best option for performance??
22:20 JoeJulian scotch.
22:21 _dist xandrea: I'm doing performance testing right now :)
22:21 JoeJulian I'm doing performance engineering right now.
22:21 xandrea wow…
22:22 xandrea I'm not lonely…  :P
22:22 _dist I'm lazily uploaded kali so I can use palimpsest because I don't feel like interpreting bonnie or using fio right now
22:22 _dist uploading*
22:22 * _dist is so lazy he didn't change the cipher on sftp and it's taking forever
22:24 _dist but, with dd I'm getting between 400-500 sequential write (cache=none) and 600-700 sequential write (cache=writeback) on an FS with a native test around 800
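For anyone wanting to reproduce that kind of number, a typical in-guest sequential-write test looks something like this (file path and size illustrative); oflag=direct bypasses the guest page cache so the result reflects the virtual disk's cache= setting:

    dd if=/dev/zero of=/root/ddtest bs=1M count=4096 oflag=direct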
22:38 xandrea _dist: can you paste your volume info ??
22:41 msmith_ joined #gluster
22:41 _dist https://dpaste.de/8MRF (it only has one brick right now) but it won't be different with 1 over 10gbe
22:41 glusterbot Title: dpaste.de: Snippet #279199 (at dpaste.de)
22:41 plarsen joined #gluster
22:41 msmith_ joined #gluster
22:43 xandrea you didn't disable “direct-io-mode”
22:48 _dist if you do, you can't use cache=none
22:51 xandrea do you think disabling the cache is better?
22:51 _dist depends, it's safer
22:52 _dist it can be a bit slower to a LOT slower depending on your setup
22:55 xandrea is there a command to show how the default settings are set?
22:56 _dist hmm, not exactly, it's in the docs. But not all settings are in the docs
22:56 _dist there is a way to reset back to default but it still doesn't show you, gluster volume set help (might show you) but even it doesn't have all the options
22:56 _dist but it does have the ones I used
22:58 _dist I'm heading out, really looking forward to semiosis adding libvirt :) (though I hope the xml settings for gluster are more _obvious_ than they were previously)
22:59 * _dist suspects virt-manager still won't work with libgfapi
23:00 recidive joined #gluster
23:00 xandrea do you use libgfapi to mount the volume?
23:06 _dist joined #gluster
23:07 _dist ok, I thought I was gone but... JoeJulian: I'm still seeing the healing issue on 3.4.5 :(
23:20 _dist I had a hunch that one of the virt options for libgfapi might have been responsible, but I reversed them all and it's still not fixed
23:21 _dist or wait, was it only as of 3.5x that it was fixed? I'll try 3.5 now
23:31 _dist upgrading to 3.5.2 for gluster and qemu fixed it
23:31 * _dist puts virt settings back in and gets ready to retest
23:31 tryggvil joined #gluster
23:36 _dist finally, finally :)
23:36 * _dist will find out from JoeJulian tomorrow why he doesn't recommend 3.5.2 yet
23:58 sonicrose joined #gluster
