
IRC log for #gluster, 2014-08-19


All times shown according to UTC.

Time Nick Message
00:02 _dist joined #gluster
00:12 _dist JoeJulian: sorry if I wasn't around for the answer earlier, but I'm still wondering why you recommend against 3.5x for live vm hosting, if I read you right previously
00:23 Pupeno joined #gluster
00:25 semiosis @seen _dist
00:25 glusterbot semiosis: _dist was last seen in #gluster 12 minutes and 19 seconds ago: <_dist> JoeJulian: sorry if I wasn't around for the answer earlier, but I'm still wondering why you recommend against 3.5x for live vm hosting, if I read you right previously
00:25 semiosis _dist: i just updated the 3.5.2 packages for wheezy on download.g.o with the libgfapi header glfs.h
00:26 semiosis @later tell kkeithley what did you have to change to make debs for debian jessie?  i'm guessing the changelog & something in the apt repo dir -- what specifically?  anything else? thx
00:26 glusterbot semiosis: The operation succeeded.
00:46 Pupeno_ joined #gluster
00:46 sputnik13 joined #gluster
00:49 recidive joined #gluster
01:01 vimal joined #gluster
01:17 bene2 joined #gluster
01:25 gildub joined #gluster
01:29 semiosis @later tell _dist i updated the 3.5.2 packages for wheezy on download.g.o with the libgfapi header glfs.h
01:29 glusterbot semiosis: The operation succeeded.
01:29 * semiosis afk
01:30 Pupeno joined #gluster
01:35 bala joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:56 sputnik13 joined #gluster
01:57 gehaxelt joined #gluster
02:03 _dist thanks semiosis
02:03 _dist I'm going to try and work with the proxmox devs to get 3.5.2 in as an option to install, worst case I'll recompile their version of qemu with 3.5.2 gluster
02:25 bharata-rao joined #gluster
02:25 Pupeno_ joined #gluster
02:26 gildub joined #gluster
02:34 RobertLaptop joined #gluster
02:35 fengkun02 joined #gluster
02:39 Lee- joined #gluster
02:40 sputnik13 joined #gluster
02:49 fengkun02 volume remove-brick start: failed: Bricks not from same subvol for replica
02:50 fengkun02 what's wrong?
02:50 topshare joined #gluster
02:55 fengkun02 subvol  is what?
02:58 topshare support subvol now?
03:00 topshare a brick after being processed by at least one translator.
03:00 fengkun02 i have an error when i use remove-brick, error info:
03:00 fengkun02 #  gluster volume remove-brick fk 10.96.33.18:/home/brick 10.96.35.33:/home/brick start
03:00 fengkun02 volume remove-brick start: failed: Bricks not from same subvol for replica
03:01 fengkun02 the subvol is what?
03:01 chuck_ joined #gluster
03:02 chuck_ hi, guys
03:02 fengkun02 Volume Name: fk
03:02 fengkun02 Type: Distributed-Replicate
03:02 fengkun02 Volume ID: 36dc3f9e-3a4c-43c6-8dbc-20bdb3d4f0bc
03:02 fengkun02 Status: Started
03:02 fengkun02 Number of Bricks: 2 x 2 = 4
03:02 fengkun02 Transport-type: tcp
03:02 fengkun02 Bricks:
03:02 fengkun02 Brick1: 10.96.45.43:/home/brick
03:02 fengkun02 Brick2: 10.96.35.33:/home/brick
03:02 fengkun02 Brick3: 10.96.33.18:/home/brick
03:02 fengkun02 Brick4: 10.96.32.59:/home/brick
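In a 2 x 2 distributed-replicate volume like this, bricks are paired into replica subvolumes in creation order: (Brick1, Brick2) and (Brick3, Brick4). The failing command above mixes Brick3 and Brick2, which belong to different pairs, hence "Bricks not from same subvol for replica". A minimal sketch of a command that avoids the error, assuming the goal is to shrink the volume by one whole replica pair (which pair to drop is a choice the log never settles):

    # remove one complete replica subvolume: Brick3 + Brick4
    gluster volume remove-brick fk 10.96.33.18:/home/brick 10.96.32.59:/home/brick start
    # poll until data has migrated off the pair, then commit
    gluster volume remove-brick fk 10.96.33.18:/home/brick 10.96.32.59:/home/brick status
    gluster volume remove-brick fk 10.96.33.18:/home/brick 10.96.32.59:/home/brick commit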
03:08 sputnik13 joined #gluster
03:11 XpineX joined #gluster
03:34 haomaiwa_ joined #gluster
03:35 topshare joined #gluster
03:36 sputnik13 joined #gluster
03:53 kanagaraj joined #gluster
03:59 lalatenduM joined #gluster
04:00 meghanam joined #gluster
04:00 meghanam_ joined #gluster
04:01 itisravi joined #gluster
04:08 hchiramm joined #gluster
04:09 rastar joined #gluster
04:14 itisravi joined #gluster
04:23 haomaiwa_ joined #gluster
04:25 bmikhael joined #gluster
04:29 haomai___ joined #gluster
04:29 spandit joined #gluster
04:36 Rafi_kc joined #gluster
04:37 anoopcs joined #gluster
04:43 haomaiwa_ joined #gluster
04:50 ramteid joined #gluster
04:53 Rafi_kc joined #gluster
04:54 RameshN joined #gluster
04:56 sputnik13 joined #gluster
04:57 atinmu joined #gluster
05:02 jiffin joined #gluster
05:10 topshare joined #gluster
05:11 atalur joined #gluster
05:13 sputnik13 joined #gluster
05:21 lyang0 joined #gluster
05:22 hagarth joined #gluster
05:24 sputnik13 joined #gluster
05:25 kdhananjay joined #gluster
05:27 saurabh joined #gluster
05:30 ppai joined #gluster
05:32 fyxim_ joined #gluster
05:33 bala joined #gluster
05:34 topshare joined #gluster
05:37 harish_ joined #gluster
05:42 harish__ joined #gluster
05:45 latha joined #gluster
05:48 sputnik13 joined #gluster
05:49 bala joined #gluster
05:49 aravindavk joined #gluster
06:04 bmikhael joined #gluster
06:11 fengkun02 #  gluster volume remove-brick fk 10.96.33.18:/home/brick 10.96.35.33:/home/brick start
06:11 fengkun02 volume remove-brick start: failed: Bricks not from same subvol for replica
06:11 fengkun02 what's wrong?
06:12 dusmantkp_ joined #gluster
06:19 rgustafs joined #gluster
06:25 kumar joined #gluster
06:27 haomai___ joined #gluster
06:29 nishanth joined #gluster
06:33 nshaikh joined #gluster
06:39 HoloIRCUser1 joined #gluster
06:42 kshlm joined #gluster
06:48 raghu` joined #gluster
06:51 Thilam joined #gluster
06:58 ctria joined #gluster
06:59 andreask joined #gluster
07:03 topshare joined #gluster
07:04 siXy joined #gluster
07:11 nbalachandran joined #gluster
07:12 kanagaraj joined #gluster
07:18 karnan joined #gluster
07:21 andreask joined #gluster
07:21 hagarth joined #gluster
07:22 bharata-rao joined #gluster
07:23 aravindavk joined #gluster
07:29 ekuric joined #gluster
07:30 siXy joined #gluster
07:33 LebedevRI joined #gluster
07:36 glusterbot New news from newglusterbugs: [Bug 1123294] [FEAT] : provide an option to set glusterd log levels other than command line flag <https://bugzilla.redhat.com/show_bug.cgi?id=1123294>
07:36 HoloIRCUser2 joined #gluster
07:52 mbukatov joined #gluster
08:01 RameshN joined #gluster
08:04 HoloIRCUser4 joined #gluster
08:08 liquidat joined #gluster
08:19 nbalachandran joined #gluster
08:19 aravindavk joined #gluster
08:20 HoloIRCUser joined #gluster
08:21 HoloIRCUser2 joined #gluster
08:26 itisravi joined #gluster
08:34 RameshN joined #gluster
08:45 ricky-ti1 joined #gluster
08:54 HoloIRCUser joined #gluster
08:57 hagarth joined #gluster
09:22 Slashman joined #gluster
09:27 meghanam_ joined #gluster
09:27 meghanam joined #gluster
09:40 doekia_ joined #gluster
09:40 HoloIRCUser1 joined #gluster
09:44 HoloIRCUser joined #gluster
09:45 Pupeno joined #gluster
10:12 edward1 joined #gluster
10:19 andreask joined #gluster
10:22 ws2k3 hello if i would use striped replicated can i access that volume use the fuse client ? so if one machine goes down the other one is still working ?
10:29 kkeithley1 joined #gluster
10:36 glusterbot New news from newglusterbugs: [Bug 1131447] [Dist-geo-rep] : Session folders does not sync after a peer probe to new node. <https://bugzilla.redhat.com/show_bug.cgi?id=1131447>
10:38 pradeepto joined #gluster
10:42 pradeepto hi, is it possible to have a single server with a volume on it?
10:43 pradeepto sudo gluster volume create buildvol hostname:/opt/gluster-brick
10:43 pradeepto volume create: buildvol: failed: Host hostname is not in 'Peer in Cluster' state
10:44 pradeepto where hostname is something I have set in the etc/hosts and also tried with the IP address.
10:50 kkeithley1 pradeepto: it is, I do it all the time.
10:51 pradeepto hey kkeithley_  that is reassuring, but what am I missing here because it keeps failing for me.
10:51 pradeepto This is a centos 6.5 box with gluster 3.5.2
10:52 kkeithley_ what IP does it have in /etc/hosts? (You can't use 127.0.0.1.)
10:53 pradeepto I am using an AWS public IP.
10:53 ws2k3 hello if i would use striped replicated can i access that volume use the fuse client ? so if one machine goes down the other one is still working ?
10:54 pradeepto kkeithley_: my /etc/hosts file is as follows :
10:54 pradeepto x.x.x.x glusterjenkins.foosolutions.com glusterjenkins
10:54 pradeepto y.y.y.y  glusterclient.foosolutions.com glusterclient
10:55 pradeepto glusterjenkins has glusterd running on it.
10:55 kkeithley_ pradeepto: selinux and firewall are off?
10:56 pradeepto kkeithley_: there is AWS security groups, yes. But I have opened the necessary ports, as per docs.
10:56 kkeithley_ ws2k3: yes, that's the whole point of replicated. And yes, fuse client works with stripe.
10:56 HoloIRCUser2 joined #gluster
10:57 ws2k3 kkeithley so if i would have 2 machine with a stripe replicated volume if one server goes down the volume keeps working right?
10:58 pradeepto kkeithley_: so it is enabled but in permissive mode.
10:58 kkeithley_ pradeepto: I've never used AWS. Other people have, but I don't know what they've said they have to do, if anything.
10:59 pradeepto kkeithley_: sestatus
10:59 pradeepto SELinux status:                 enabled
10:59 pradeepto SELinuxfs mount:                /selinux
10:59 pradeepto Current mode:                   permissive
10:59 pradeepto Mode from config file:          permissive
10:59 pradeepto Policy version:                 24
10:59 pradeepto Policy from config file:        targeted
10:59 pradeepto (sorry about the long paste)
11:00 kkeithley_ the paste police don't show up 'til later. ;-)
11:00 pradeepto kkeithley_: ok, but I am confused here. I am on the server already and I am not peering with some other server. The volume is being created on the machine I am logged into.
11:00 kkeithley_ that should be okay
11:00 pradeepto rather, I am trying to create it there, but it fails.
11:01 kkeithley_ yes, I agree it's confusing. It should work. Ask again in an hour or two when the people who use AWS start coming online
11:01 pradeepto sure, thanks kkeithley_ , appreciate your help.
11:02 kkeithley_ and it isn't just me that uses single servers. A lot of our regression tests use single servers. (just not on AWS.)
11:03 pradeepto kkeithley_: by single server you mean what I described right, glusterd running on only one box. And multiple clients connecting to it so that they can share the same volume.
11:03 pradeepto (that is my usecase)
11:03 kkeithley_ pradeepto: correct
11:03 pradeepto cool, glad to know that I am not alone :)
11:05 ws2k3 kkeithley so if i would have 2 machine with a stripe replicated volume if one server goes down the volume keeps working right?
11:06 kkeithley_ yes, that's the whole point of replicated.
11:07 ira joined #gluster
11:07 kkeithley_ you might want to double check about why you're using stripe though. For most people it's not what they think it is
11:11 ws2k3 okay thank you
11:29 HoloIRCUser joined #gluster
11:39 HoloIRCUser2 joined #gluster
11:43 chirino joined #gluster
11:48 lalatenduM joined #gluster
11:48 pradeepto kkeithley_: fwiw, i just created new ec2 instance with pretty much all permissions, it still fails. I will probably share the logs later when others pop up.
11:49 kkeithley_ yep
11:49 kkeithley_ there is some magic that AWS users have to do. I don't know what it is though. Sorry
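One likely piece of that "magic", offered as an assumption since the log never records pradeepto's fix: an EC2 public IP is NAT'd and never bound to an interface on the instance, so glusterd does not recognise the brick host as itself and rejects the create with "not in 'Peer in Cluster' state". Pointing the hostname at the private address usually works; the 172.31.x.x value below is a placeholder:

    # on the instance: the address glusterd can actually see
    ip addr show eth0
    # /etc/hosts should map the name to the private IP, not the elastic/public one
    # 172.31.x.x  glusterjenkins.foosolutions.com glusterjenkins
    gluster volume create buildvol glusterjenkins:/opt/gluster-brick
    gluster volume start buildvol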
12:02 rtalur_ joined #gluster
12:09 HoloIRCUser joined #gluster
12:12 qdk joined #gluster
12:14 nishanth joined #gluster
12:15 andreask joined #gluster
12:20 B21956 joined #gluster
12:21 rtalur_ joined #gluster
12:23 bene2 joined #gluster
12:23 simulx joined #gluster
12:27 hchiramm joined #gluster
12:36 caiozanolla joined #gluster
12:41 theron joined #gluster
12:42 julim joined #gluster
12:46 marbu joined #gluster
12:57 marcoceppi joined #gluster
12:57 marcoceppi joined #gluster
12:58 _Bryan_ joined #gluster
13:03 julim joined #gluster
13:07 glusterbot New news from newglusterbugs: [Bug 1131502] Fuse mounting of a tcp,rdma volume with rdma as transport type always mounts as tcp without any fail <https://bugzilla.redhat.com/show_bug.cgi?id=1131502>
13:12 plarsen joined #gluster
13:13 kanagaraj joined #gluster
13:14 pradeepto joined #gluster
13:14 pradeepto joined #gluster
13:16 pradeepto kkeithley_: any idea, around what time the AWS gurus arrive here? :)
13:17 kkeithley_ around about now I'd guess. People on the west coast probably aren't even awake yet. ;-)
13:21 HoloIRCUser1 joined #gluster
13:27 meghanam joined #gluster
13:27 hagarth joined #gluster
13:28 mojibake joined #gluster
13:34 meghanam_ joined #gluster
13:46 rgustafs joined #gluster
13:48 tdasilva joined #gluster
13:52 HoloIRCUser joined #gluster
14:06 rgustafs joined #gluster
14:07 sputnik13 joined #gluster
14:11 coredump joined #gluster
14:16 plarsen joined #gluster
14:21 hagarth joined #gluster
14:22 HoloIRCUser1 joined #gluster
14:27 wushudoin joined #gluster
14:28 _dist joined #gluster
14:38 bennyturns joined #gluster
14:43 jbrooks joined #gluster
14:47 VerboEse joined #gluster
14:54 ctria joined #gluster
15:01 cdhouch joined #gluster
15:03 cdhouch In a glusterfs 3.4.0 distributed replicated volume, is it normal for there to always be files to heal when I issue "gluster volume heal vmstorage info" ?  It never seems to be empty
15:05 _dist cdhouch: I'm guessing from your volume name that you're running VMs on it, there is a bug that causes this (as far as I can tell pre 3.5x) but we ran that way for 6 months, it's only cosmetic - albeit a pain when you're actually healing
15:06 cdhouch we'd like to do the rolling upgrade to 3.5
15:06 cdhouch but it says there can't be anything to heal to do it heh
15:07 cdhouch otherwise we have to take a sitewide outage
15:07 tom[] joined #gluster
15:07 _dist depending on your vm platform (for 0 outage) I recommend you perform storage migration
15:08 cdhouch Ovirt.  You mean set up the new gluster setup on new hardware and migrate the storage to it?
15:09 _dist depends on your setup, we use combination compute/storage nodes on single boxes, so we just storage migrate to local storage, stop volume, upgrade, start volume migrate back
15:10 cdhouch ah we have our gluster storage boxes separate from our VM hosts.  So the VMs run on blades and gluster is on dell 520s via 10Gb.
15:10 ProT-0-TypE joined #gluster
15:11 HoloIRCUser joined #gluster
15:13 _dist it might be easiest to just schedule downtime for the upgrade
15:13 _dist (if you can afford to do that)
15:13 cdhouch yeah, we're looking at that with the 3 day weekend coming up
15:14 _dist I'm not confident enough to say, but as far as I know your VMs aren't actually healing, but the xattr data does think there are pending ops, I'm not sure if an in-place upgrade would be safe. I wouldn't chance it myself.
15:14 _dist (by in-place I mean live)
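A way to check whether the never-empty heal list is only this cosmetic symptom is to inspect the AFR changelog xattrs directly on a brick; all-zero trusted.afr.*-client-N values mean no operations are actually pending for that file. A sketch, with the brick path and image name invented for illustration:

    # run on a storage node against the brick path, not the client mount
    getfattr -m . -d -e hex /bricks/vmstorage/images/vm-101-disk-1.qcow2
    # compare with what the heal command reports
    gluster volume heal vmstorage info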
15:14 cdhouch yeah, we have about 300 vms
15:14 cdhouch dang wish I could just break the mirror, upgrade half, then mirror, resplit, upgrade other half
15:15 _dist unfortunately that's another thing that isn't safe to do when you have a healing list...
15:15 cdhouch any way to tell one side to stop taking new connections?
15:16 _dist the safest way to do it, would be to gracefully shut down one storage node and never bring it back up as its' pre-configured self (reinstall)
15:16 _dist this is assuming your single box can handle all your load until you do that, get your upgrade done and perform migration over to the new machine
15:17 _dist BUT, I honestly don't think any version (JoeJulian correct me if I'm wrong) will be _safe_ for an in-place add-brick which is ultimately what you'll do once your second box is upgraded too
15:18 cdhouch We've got (1->2 3->4) setup.  Sounds like we'll have to take the systemwide outage
15:18 cdhouch need to get to a point where we don't need to do that any more.  We grew to needing 24x7 uptime.
15:19 _dist yeah, the only way we've been able to avoid system outages is by migrating running vms to local storage on the compute nodes while upgrading storage
15:20 cdhouch Hmm ovirt 3.5 is supposed to come out Sept 3rd.  Would probably be better to do this all at once.  Anyway thanks for the advice :)  Sorry I missed the RH Summit this year, but I'll see you guys in 2015.
15:20 cdhouch I still owe Joe a beer
15:21 _dist yeap, nothing's perfect but so far in my experience gluster is "least flawed" :)
15:21 sputnik13 joined #gluster
15:22 bala joined #gluster
15:28 HoloIRCUser joined #gluster
15:28 plarsen joined #gluster
15:30 ws2k3 hello, i'm trying to create a striped replicated volume this command i used : gluster volume create statlogs stripe 2 replica 2 transport tcp 10.1.2.7:/sdb1 10.1.2.7:/sdc1 10.1.2.8:/sdb1 10.1.2.8:/sdc1 but then it gives me a warning when i try : gluster volume create statlogs stripe 2 replica 2 transport tcp 10.1.2.7:/sdb1 10.1.2.8:/sdc1 10.1.2.7:/sdb1 10.1.2.8:/sdc1 it says multiple
15:30 ws2k3 entry. which rule should i use to get the first disk of both servers in stripe, with the second disk on a server being a copy of the other server's disk 1?
15:39 andreask joined #gluster
15:40 aravindavk joined #gluster
15:41 mhoungbo joined #gluster
15:50 mhoungbo joined #gluster
15:58 HoloIRCUser1 joined #gluster
16:03 ws2k3 anyone?
16:05 cdhouch Sorry I've only ever used distributed replicated, I've never used striped for anything
16:06 ctria joined #gluster
16:15 ws2k3 if i would use gluster volume create statlogs stripe 2 replica 2 transport tcp 10.1.2.7:/mnt/sdb1 10.1.2.8:/mnt/sdb1 10.1.2.7:/mnt/sdc1 10.1.2.8:/mnt/sdc1 force then 10.1.2.7/mnt/sdc1 is a replica of 10.1.2.8/mnt/sdb1 right ?
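For stripe 2 replica 2, bricks are grouped into replica pairs in the order they are listed (first two, next two), and the stripe then runs across those pairs. So with the ordering in the command above, 10.1.2.7:/mnt/sdc1 is a replica of 10.1.2.8:/mnt/sdc1, not of 10.1.2.8:/mnt/sdb1: each disk mirrors the same-named disk on the other server, and the stripe spans sdb1 and sdc1. A sketch of that layout, assuming the bricks are mounted under /mnt as shown:

    # replica pair 1 = sdb1 on both servers, replica pair 2 = sdc1 on both servers
    gluster volume create statlogs stripe 2 replica 2 transport tcp \
        10.1.2.7:/mnt/sdb1 10.1.2.8:/mnt/sdb1 \
        10.1.2.7:/mnt/sdc1 10.1.2.8:/mnt/sdc1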
16:19 zerick joined #gluster
16:19 ninkotech joined #gluster
16:22 gmcwhistler joined #gluster
16:24 HoloIRCUser1 joined #gluster
16:27 julim joined #gluster
16:34 nishanth joined #gluster
16:34 jobewan joined #gluster
16:36 bmikhael joined #gluster
16:37 caiozanolla joined #gluster
16:41 PeterA joined #gluster
16:45 pradeepto joined #gluster
16:47 pradeepto_ joined #gluster
16:56 Thilam|work joined #gluster
16:59 HoloIRCUser joined #gluster
16:59 semiosis kkeithley_: what did you have to change to make debs for debian jessie?  i'm guessing the changelog & something in the apt repo dir -- what specifically?  anything else?
17:01 andreask joined #gluster
17:01 clyons joined #gluster
17:10 cjhanks joined #gluster
17:13 sputnik13 joined #gluster
17:17 HoloIRCUser1 joined #gluster
17:18 HoloIRCUser2 joined #gluster
17:25 ProT-0-TypE joined #gluster
17:27 Frank77 joined #gluster
17:28 dtrainor joined #gluster
17:30 theron joined #gluster
17:32 Frank77 Hello. I have got an issue with KVM/Qemu 1.4.2 and GlusterFS 3.5.2. I recently updated my GlusterFS environment and now when I run "qm start 100" I have got the following error "failed: got timeout". I saw that I didn't have such problems with locally stored disk images. I didn't find anything with strace for the moment. (I use Proxmox 3.1-21 distrib.)
17:34 Frank77 But at least  the VM starts but I can't live migrate it. I think because of this timeout error.
17:37 semiosis _dist: ^^^
17:47 kkeithley_ semiosis: ISTR only changing the changelog
17:47 kkeithley_ let me look though, one sec
17:47 semiosis interesting.  did you test installing from the apt repo for jessie?
17:47 semiosis istr having to set up the apt repo for wheezy
17:48 kkeithley_ I did not try
17:48 kkeithley_ and nobody has complained, AFAIK
17:49 semiosis ok thx
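For reference, the changelog-only change kkeithley_ describes usually looks like the sketch below; the version string is illustrative only, and dch comes from the devscripts package:

    # inside the unpacked glusterfs packaging tree
    dch --newversion 3.5.2-2 --distribution jessie "Rebuild for Debian jessie"
    dpkg-buildpackage -us -uc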
17:49 nullck joined #gluster
17:51 _dist semiosis: I'm here :)
17:51 HoloIRCUser joined #gluster
17:52 kkeithley_ nobody has complained about it being broken or not working
17:52 _dist Frank77: I'm using something pretty similar, can you describe your setup in a little more detail?
17:53 _dist Frank77: I'm using proxmox 3.2 right now (probably going to stay on it after trying to leave proxmox and realizing libgfapi support is just poor everywhere else)
17:53 andreask joined #gluster
17:54 _dist I vaguely remember issues like you're having now, I would suspect your volume settings, or disk cache settings
17:54 kkeithley_ in a quick look I don't think I changed anything except the changelog
17:55 kkeithley_ and followed your recipe
17:55 kkeithley_ I can tar up my build dir for you if you want something to compare to
17:56 andreask joined #gluster
17:57 theron_ joined #gluster
17:58 theron joined #gluster
17:58 sputnik13 joined #gluster
17:59 theron_ joined #gluster
18:01 nbalachandran joined #gluster
18:01 mojibake joined #gluster
18:06 mrkurtz__ joined #gluster
18:06 semiosis kkeithley_: thx but no thx
18:07 semiosis _dist: did you see Frank77's question?  can you answer that?
18:07 semiosis seems proxmox related, maybe?
18:08 semiosis Frank77: if you can get the glusterfs log file (no idea where to find it on proxmox, normally /var/log/gluster) and put that on pastie.org maybe we can find the problem
18:12 Frank77 ok thank you (one sec)
18:12 Humble joined #gluster
18:12 andreask joined #gluster
18:15 semiosis kkeithley_: fyi, last night i set up a github repo with the debian source packages: https://github.com/semiosis/glusterfs-debian
18:15 glusterbot Title: semiosis/glusterfs-debian · GitHub (at github.com)
18:15 sputnik13 joined #gluster
18:17 Frank77 here it is http://pastebin.com/rK0izYRu
18:17 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
18:18 semiosis Frank77: you have only one brick???
18:19 Frank77 This is a brick test. I have got also another volume with 5+5 bricks (AFR)
18:19 Frank77 But I have got the same issues (if you want the logs too ..)
18:19 sputnik13 joined #gluster
18:20 semiosis couldn't hurt
18:20 Frank77 k
18:20 mrkurtz__ anyone have any recommendations for quick overview guides targeted towards people who inherit a gluster setup all of a sudden?
18:20 semiosis there's no problem apparent in this log you posted, although your client mount is being killed
18:20 Frank77 you mean it is often disconnected ?
18:21 semiosis "received signum (15), shutting down"
18:22 semiosis hoping _dist will chime in on this, he's got experience with gluster on proxmox
18:22 Frank77 http://fpaste.org/126791/
18:22 glusterbot Title: #126791 Fedora Project Pastebin (at fpaste.org)
18:23 Frank77 k
18:24 semiosis more sigterms in that log
18:26 _dist Frank77: I'm here, but I'm going on a lot of assumptions. I'm assuming you're using glusterfs as (so qemu libgfapi not NFS), I'm assuming bricks are on separate machines, and that it's a replicate volume with the virt options applied already
18:26 Frank77 yes exactly
18:27 Frank77 but let me check the options agins
18:27 Frank77 again
18:29 _dist can you paste the task log from proxmox when the migration fails? better yet you might want to just QM yourself via cli with verbose, but I've not found that it usually provides more info
18:31 _dist Frank77: I'll be back in 15 min, gotta drive someone somewhere :) sorry about that
18:32 Frank77 ok no prob. I going to eat soon xD
18:33 Frank77 http://fpaste.org/126793/ (qm command result). But the virtual machine starts correctly. Then, when I want to live migrate it just keeps running (I let it run for one night ..).
18:33 glusterbot Title: #126793 Fedora Project Pastebin (at fpaste.org)
18:34 Frank77 I can see in the process list that a new process is created on the other node ready to start the transfered machine but the process just keep on running.
18:35 Frank77 (brb)
18:38 caiozanolla joined #gluster
18:45 decimoe joined #gluster
18:47 ws2k3 does gluster have some kind of raid5 so one volume and if one brick goes missing it can repair that data?
18:49 ws2k3 cause if i want distributed replicated or striped replicated i always need the same amount of bricks as servers, so if i want to add a server every server needs one extra brick; this makes scalability a bit harder
18:51 _dist Frank77: something I do, not that it should matter all that much, is define my gluster mount as localhost but I doubt very much that's the cause of your issue
18:51 _dist if I missed it sorry, but can you paste a gluster volume info on gv2?
18:54 theron joined #gluster
18:57 Frank77 ok
18:58 Frank77 here it is. http://fpaste.org/126798/
18:58 glusterbot Title: #126798 Fedora Project Pastebin (at fpaste.org)
19:00 Frank77 there is one detail I saw in the VOL_1 log. There seems to be a version mismatch but I checked my server version and they are both the same (glusterfs 3.5.2 built on Aug  1 2014 11:12:14)
19:06 semiosis ws2k3: erasure coding is coming soon to glusterfs.  for now though just replication
19:07 sickness cool =_)
19:07 sickness but isn't the ec xlator already usable? =_)
19:07 semiosis sickness: maybe, i dont know how to use it
19:08 ws2k3 semiosis what do you exactly mean with erasure coding? is that an algorithm to repair the volume when a brick goes down?
19:08 semiosis yes
19:08 ws2k3 ah good to hear
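The RAID-5-style behaviour ws2k3 is asking about is what the dispersed (erasure-coded) volume type eventually provides; it shipped after this conversation, in glusterfs 3.6. A sketch of that later syntax, with hostnames and brick paths invented for illustration:

    # 3 bricks, 2 data + 1 redundancy: any single brick can be lost and rebuilt
    gluster volume create ecvol disperse 3 redundancy 1 \
        server1:/bricks/ec server2:/bricks/ec server3:/bricks/ec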
19:08 ws2k3 semiosis any idea when the website will be back?
19:09 semiosis didnt know it was down until you mentioned it
19:13 _dist Frank77: is it possible your clients are mismatched?
19:14 _dist also, I just want the results to the cli command gluster volume info (not the log)
19:21 Frank77 ok
19:22 Frank77 http://fpaste.org/126805/
19:22 glusterbot Title: #126805 Fedora Project Pastebin (at fpaste.org)
19:23 qdk joined #gluster
19:25 Frank77 "glusterfs --version" gives me "glusterfs 3.5.2 built on Aug  1 2014 11:12:11" on client and server side.
19:27 _Bryan_ joined #gluster
19:29 Jamoflaw What is the current recommended release for production use?
19:31 Jamoflaw Am looking at deploying 3.5.2. Currently testing on that release but seen some users recommending 3.4
19:31 Jamoflaw Any reasons why?
19:31 plarsen joined #gluster
19:32 Frank77 For the moment I have an issue (live migrating does not work with KVM 1.4.2 / GFS 3.5.2). If you can have a try I would be interested.
19:33 _dist what does dpkg --list give you for both
19:33 _dist Jamoflaw: only in 3.5x have I seen the heal info work for VMs
19:34 _dist Frank77: that's another question, how did you get 3.5.2 on your 3.1 box? You really should recompile your qemu against the new api source
19:34 _dist unless someone here can verify that isn't necessary, but I know gluster team ppa has different qemu compiles for each
19:35 Jamoflaw joined #gluster
19:36 Frank77 I just downloaded from the debian repository. Here is my dpkg list (filtered). http://fpaste.org/126814/
19:36 glusterbot Title: #126814 Fedora Project Pastebin (at fpaste.org)
19:36 Frank77 So according to you I would need to update qemu too ?
19:38 _dist well, that's the problem. You'll probably want to stick to the same version of qemu as qm at 3.1 expects, and it'll need the custom compile options for glusterfs (which exist almost nowhere)
19:39 _dist I have been trying to get them to update, here's my thread http://forum.proxmox.com/threads/19102-Updated-gluster-possible
19:40 sprachgenerator joined #gluster
19:42 Frank77 ok thank you for the information. So it might be worth waiting a little for proxmox team to release an official package ?
19:43 _dist well you can see from my post that I'm planning to try the compile myself (actually working on it today). When they gave me that first answer a few weeks ago I wanted to give up on proxmox (that and missing dbg packages making stuck hosts impossible to use gdb on)
19:43 _dist but after trying ovirt & openstack again I'm sticking with proxmox for now
19:44 _dist our prod right now is pve 3.2 with gluster 3.4.2-1 (but this has issues, mainly the heal info). I've migrated everything off one of our nodes and am going to try to get glusterfs 3.5.2 with qemu 1.7.1 (pve 3.2 build) recompiled and see if I can get reliable qemu performance out of it
19:47 Frank77 ok. I already migrated my test environment to gfs 3.5.2. Do you know if it's possible to get back to the previous version without any problem (config file, paths, etc.). Also, I would be very interested in your test results.
19:47 _dist I would suspect between the time qemu 1.4.2 was built and gluster 3.5.2 was put out that the api source has changed enough that qemu needs to be recompiled, but it wouldn't be too hard to compare and see
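A quick comparison along those lines, assuming the Proxmox qemu binary is /usr/bin/kvm (paths may differ): check which libgfapi the binary actually links against versus which gluster packages are installed.

    # library the qemu binary was built/linked against
    ldd /usr/bin/kvm | grep -i gfapi
    # gluster packages actually installed on the node
    dpkg -l | grep -i gluster
    apt-cache policy glusterfs-common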
19:47 _dist Frank77: it shouldn't be too tough, just remove the gluster repo, apt-get update and do a --reinstall on glusterfs-server
19:47 Frank77 ok, thx.
19:48 _dist personally, I would stop the volumes running to do that though, but it may not be necessary
19:53 Frank77 I'll try it on one node (client only) and see if the two versions can work together
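Roughly the rollback _dist describes, sketched with an assumed repo filename and version string (both depend on how the 3.5 repository was added and what the distribution carries); note that apt only moves to an older version when asked for it explicitly:

    # drop the gluster.org 3.5 repo, refresh, then pin the older packages
    rm /etc/apt/sources.list.d/gluster.list
    apt-get update
    apt-get install glusterfs-client=3.4.5-1 glusterfs-common=3.4.5-1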
19:55 theron joined #gluster
19:58 jbrooks joined #gluster
20:02 B21956 joined #gluster
20:02 Frank77 I reinstalled gfs 3.4.5 on the client side (proxmox) and it now it works again. I let the server side with the 3.5.2 version
20:06 Frank77 I will be waiting for your tests :). I bookmarked the linked on the proxmox forum you gave me. Let us know if you have any success.
20:16 HoloIRCUser1 joined #gluster
20:25 rotbeard joined #gluster
20:32 Pupeno joined #gluster
20:49 pdrakewe_ joined #gluster
20:52 plarsen joined #gluster
20:55 caiozanolla joined #gluster
20:59 ghenry joined #gluster
21:08 Jamoflaw joined #gluster
21:09 _dist having a scary issue with peer probe, what does this mean? https://dpaste.de/MccP
21:09 glusterbot Title: dpaste.de: Snippet #279632 (at dpaste.de)
21:09 _dist why is it confused, how do I make it happy :) I'm usually less risk averse, but these are live systems
21:11 _dist re-probing worked fine nm on that front, but if I want to reprobe a previously detached peer that doesn't seem to work. Panic over google search time
21:13 _dist my first guess would be that peers need to be of the same version (these aren't)
21:13 Jamoflaw joined #gluster
21:18 andreask joined #gluster
21:21 MacWinner joined #gluster
21:22 _dist looks like 3.5.2 might not be a friendly peer for 3.4.2
21:24 _dist ok, got it to work, but the 3.5.2 (from the 3.4.2's perspective) shows a port # when all the others don't, I guess I'll be "not too worried" about that
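A sanity check worth running on each node in a mixed 3.4.2/3.5.2 cluster, assuming the stock Debian/Ubuntu location for glusterd's state file: confirm the peer list, the installed version, and the cluster operating version (which stays at the level of the oldest peer).

    gluster peer status
    glusterfs --version
    grep operating-version /var/lib/glusterd/glusterd.info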
21:30 _dist storage migration from 3.4.2 client pushing to 3.5.2 server worked
21:32 _dist machine booted from OS disk on 3.4.2 server and storage disk on 3.5.2 server at the same time (things look ok)
21:35 systemonkey joined #gluster
21:43 theron joined #gluster
21:49 _dist Frank77: I'm also getting timeouts, on all starts. But _only_ on the machine that has 3.5.2 client, the 3.4.2 clients can run vms off my 3.5.2 server just fine
21:50 _dist this might be that qemu needs to be recompiled
21:50 _dist the vm _does_ start, but after it complains
21:51 diegows joined #gluster
21:52 _dist I can migrate back to the 3.4.2 compute nodes (still running on the 3.5.2 volume)
21:53 _dist so at this point I'd suspect it's either a qm issue or a qemu issue
21:54 _dist (I really hope it's not QM)
22:04 _dist semiosis: I'm assuming because you did it, that each version of gluster requires a qemu recompile on its api/src, yes?
22:04 PeterA got an issue with database file running on gluster not able to obtain lock....
22:04 PeterA any clue??
22:05 semiosis having clients & servers the same version is a good idea, although not always necessary (hard to know when tho)
22:06 _dist not clients & servers per se (gluster client/server) but yeah I understand that qemu itself is a libgfapi client
22:06 _dist I'm assuming that qemu must work with a local gluster library like common or client
22:07 _dist or I guess it might go directly against the server it's pointed at, that would also explain my situation. Perhaps a qemu recompile.. guess I'll have to make a debian compile and figure out how to make dpkg ok with everything
22:07 dtrainor joined #gluster
22:09 glusterbot New news from newglusterbugs: [Bug 1131713] Port glusterfs regressions on MacOSX/Darwin <https://bugzilla.redhat.com/show_bug.cgi?id=1131713>
22:10 systemonkey joined #gluster
22:13 _dist anyone experienced with dependencies in debian? I want to force a new compile over top the existing one
22:17 PeterA http://pastie.org/9487402
22:17 glusterbot Title: #9487402 - Pastie (at pastie.org)
22:17 PeterA i just start getting these errors
22:17 PeterA any clue what does it means and why it happens?
22:17 PeterA i can see the actual file on the filesystem and able to open it
22:17 PeterA wonder why gluster error out?
22:18 PeterA the volume is a replica 2 3x2
22:18 PeterA xfs on ubuntu 12.04 running gluster 3.5.2
22:20 _dist PeterA: what happened to put you in this state?
22:20 PeterA we just started using the volume with NFS
22:21 PeterA and starting getting this error
22:21 PeterA server-rpc-fops.c:1535:server_open_cbk
22:21 _dist Oh, I don't have any experience using gluster and NFS on the same server, I always use a gluster client to proxy NFS and SMB
22:21 PeterA ic
22:21 PeterA me too
22:22 _dist but my guess would be the protocols aren't respecting each other? :)
22:22 _dist those IDs are most likely the gluster nodes that aren't able to access at the time
22:23 PeterA hmm…so what would be preventing it
22:24 _dist lsof ? maybe something has a lock on it, I'm not adept enough to help you further to be honest :)
22:32 PeterA np thanks :)
23:12 andreask1 joined #gluster
23:14 andreask joined #gluster
23:14 Pupeno_ joined #gluster
23:19 andreask joined #gluster
23:23 PeterA it seems like application/database can not get a lock on glusterfs over NFS....
23:23 PeterA anyone experienced that?
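One common cause of lock failures over gluster's built-in NFS server, offered as a guess since the log never resolves PeterA's issue: gluster provides its own NLM lock service, which cannot register with rpcbind if the kernel NFS server (or its lock manager) is also running on the storage node. A sketch of the usual checks, assuming Ubuntu 12.04 service names and an NFSv3-over-TCP mount on the client:

    # on the gluster server: kernel NFS must not run alongside gluster NFS
    service nfs-kernel-server status
    rpcinfo -p | grep -E 'nlockmgr|status'
    # on the client: mount with normal locking (do not pass -o nolock)
    mount -t nfs -o vers=3,proto=tcp server:/volume /mnt/volume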
23:30 agen7seven joined #gluster
23:56 jvandewege joined #gluster
