
IRC log for #gluster, 2015-01-20


All times shown according to UTC.

Time Nick Message
00:07 diegows joined #gluster
00:27 fubada- hi purpleidea
00:27 fubada- curious to see if you had time to do a patch for the vrrp file removal
00:28 fubada- in the puppet module
00:28 fubada- thanks :)
00:28 purpleidea fubada-: i did not yet sorry, since it's not dangerous (just annoying) i've had to deal with a few other priorities first, sorry
00:28 fubada- no problemo :)
00:28 fubada- thanks anyways
00:30 MugginsM joined #gluster
00:30 purpleidea fubada-: it turns out it's a bit tricky too :( i made one attempt, but then got stuck a bit. will have to look again. sorry
00:31 purpleidea fubada-: btw what part of the world are you in?
00:31 fubada- nyc
00:31 fubada- what about you
00:31 purpleidea Canada... i was going to mention, I'll be in Europe shortly to be at some conferences. I'm giving two GlusterFS related sessions.
00:32 fubada- nice!
00:32 fubada- thats awesome dude congrats
00:33 purpleidea fubada-: oh no big deal, i've done it previously, it was a mention in case you were in one of those cities
00:34 fubada- nope. probably not any time this winter.  Are you coming to any gigs in nyc?
00:36 purpleidea not that i know of atm, but my sister lives nearby though (technically nj near the border)
00:53 dgandhi joined #gluster
00:53 gildub joined #gluster
01:00 plarsen joined #gluster
01:10 psi_ joined #gluster
02:11 shaunm joined #gluster
02:27 clane joined #gluster
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:56 bala joined #gluster
02:56 badone joined #gluster
03:26 hagarth joined #gluster
03:37 kshlm joined #gluster
03:38 bharata-rao joined #gluster
03:56 meghanam joined #gluster
04:05 kanagaraj joined #gluster
04:08 nishanth joined #gluster
04:12 atinmu joined #gluster
04:19 nbalacha joined #gluster
04:24 gem joined #gluster
04:24 nishanth joined #gluster
04:24 shubhendu joined #gluster
04:25 RameshN_ joined #gluster
04:35 hagarth joined #gluster
04:38 Manikandan joined #gluster
04:45 suman_d joined #gluster
04:46 anoopcs joined #gluster
04:52 nangthang joined #gluster
04:53 ppai joined #gluster
05:01 spandit joined #gluster
05:07 jiffin joined #gluster
05:08 sakshi joined #gluster
05:09 lalatenduM joined #gluster
05:09 anil joined #gluster
05:10 dusmant joined #gluster
05:17 rafi joined #gluster
05:24 pp joined #gluster
05:33 kumar joined #gluster
05:35 kdhananjay joined #gluster
05:35 karnan joined #gluster
05:38 fubada- joined #gluster
05:39 hagarth joined #gluster
05:48 dusmant joined #gluster
05:48 ndarshan joined #gluster
05:48 nishanth joined #gluster
05:51 rjoseph joined #gluster
05:52 ndarshan joined #gluster
05:55 karnan joined #gluster
05:59 overclk joined #gluster
06:00 anil joined #gluster
06:01 ppai joined #gluster
06:07 raghu joined #gluster
06:21 karnan joined #gluster
06:25 suman_d joined #gluster
06:27 suman_d joined #gluster
06:28 soumya joined #gluster
06:29 meghanam joined #gluster
06:35 deepakcs joined #gluster
06:38 nrcpts joined #gluster
06:38 nrcpts joined #gluster
06:42 stickyboy I had a brick fail; is it better to re-provision the server and bring it back up with the same name / volume id, or bring the machine up as a new name and "replace" the old brick?
06:42 atalur joined #gluster
06:43 nshaikh joined #gluster
06:56 karnan joined #gluster
06:57 mbukatov joined #gluster
07:21 atalur joined #gluster
07:26 DV joined #gluster
07:28 fandi joined #gluster
07:33 fandi joined #gluster
07:36 jtux joined #gluster
07:37 fandi joined #gluster
07:40 nishanth joined #gluster
07:43 Philambdo joined #gluster
07:48 atalur joined #gluster
07:50 ppai joined #gluster
07:51 dusmant joined #gluster
08:03 ndevos stickyboy: I would prefer to not change the hostname, and this should work for that: http://www.gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server
08:04 [Enrico] joined #gluster
08:09 aravindavk joined #gluster
08:13 deniszh joined #gluster
08:13 DV joined #gluster
08:14 stickyboy ndevos: Cool.  I've seen those docs but wasn't sure if they were current (August, 2013).  Lemme grok them.
08:16 hchiramm_ Manikandan++ gem++ sakshi++
08:16 glusterbot hchiramm_: Manikandan's karma is now 1
08:16 glusterbot hchiramm_: gem's karma is now 1
08:16 glusterbot hchiramm_: sakshi's karma is now 1
08:17 ndevos stickyboy: I think they are still current, but I did not test it with the latest releases...
08:18 stickyboy ndevos: No problem.  I'm on 3.5.x anyways, so the gap isn't as wide. ;)
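
For reference, the restoration procedure ndevos links (keep the hostname, reuse the dead server's peer UUID, restore the brick's volume-id xattr, then heal) looks roughly like the sketch below; the volume name "myvol", brick path and hostnames are illustrative, and the exact steps should be checked against that wiki page for your release.

    # on a surviving peer: find the UUID the dead server was known by
    cat /var/lib/glusterd/peers/*            # note the uuid= line whose hostname1 matches the dead server

    # on the rebuilt server (same hostname), with glusterd stopped:
    echo "UUID=<uuid-noted-above>" > /var/lib/glusterd/glusterd.info
    service glusterd start
    gluster peer probe <surviving-peer>      # volume definitions sync back from the pool

    # recreate the brick directory, restore its volume-id xattr, and trigger a full heal
    mkdir -p /bricks/myvol
    vol_id=$(grep volume-id /var/lib/glusterd/vols/myvol/info | cut -d= -f2 | sed 's/-//g')
    setfattr -n trusted.glusterfs.volume-id -v 0x$vol_id /bricks/myvol
    service glusterd restart
    gluster volume heal myvol full
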
08:21 Fen1 joined #gluster
08:21 booly-yam-6137 joined #gluster
08:25 Fen1 Hi ! :) I have 6 glusterfs nodes, each node has 4 network interfaces, so i've created 2 bonds (bond0: management & bond1: data) with bond-mode balance-alb. If i understand correctly i should be able to use 2Gb/s on each bond, so why is just 1 link in the bond being used ? :(
08:26 Fen1 (Each link is 1Gb/s max)
08:28 liquidat joined #gluster
08:30 fsimonce joined #gluster
08:30 mrEriksson Fen1: Well, since TCP likes its packets to arrive in order, for a single stream you won't get more speed than what one single interface would provide
08:31 mrEriksson IIRC, there are some round-robin modes available for Linux bonding, but these aren't really network friendly
08:31 stickyboy Fen1: As I learned when I deployed Gluster on 2x1GbE bonds a few years ago: Bonding increases link capacity, not individual streams. :)
08:31 Fen1 uhm.. ok, so there is no way to combine the bandwidth of the 2 links in the same bond ?
08:31 stickyboy So bonding will help if you have many clients.
08:32 ckotil joined #gluster
08:32 mrEriksson Fen1: From what I understand, there is no standard that solves this in a decent way for a single stream, no
08:32 Fen1 ok thx a lot :)
08:34 anil joined #gluster
08:34 mrEriksson I bonded four ethernet-bridged DSL links a couple of years ago (quite many, actually) and managed to solve this by doing PPP multilink over them all. Worked, but wasn't pretty at all
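
As a quick way to confirm what stickyboy and mrEriksson describe, the bond mode and per-slave traffic can be inspected on the host; bond0/eth0/eth1 below are illustrative interface names.

    cat /proc/net/bonding/bond0    # shows the bonding mode (e.g. adaptive load balancing) and the slave list
    ip -s link show eth0           # per-slave RX/TX byte counters
    ip -s link show eth1           # a single TCP stream will only ever move one of these
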
08:34 anoopcs joined #gluster
08:41 sputnik13 joined #gluster
08:42 atalur joined #gluster
08:45 atinmu joined #gluster
08:50 rjoseph joined #gluster
08:52 dusmant joined #gluster
08:54 shubhendu joined #gluster
08:58 hagarth joined #gluster
09:04 atalur joined #gluster
09:05 ppai joined #gluster
09:06 anoopcs joined #gluster
09:13 kdhananjay joined #gluster
09:16 soumya joined #gluster
09:17 hagarth joined #gluster
09:21 Slashman joined #gluster
09:33 sputnik13 joined #gluster
09:44 1JTAADMPP joined #gluster
09:47 spandit joined #gluster
09:48 stickyboy Man, I've had four Seagate 3TB drives fail in the last 14 days.
09:48 stickyboy Insane!
09:50 ricky-ticky1 joined #gluster
09:54 ndevos stickyboy: if all those drives come from the same batch, and they get used equally, I guess it is pretty normal
09:57 ricky-ticky joined #gluster
10:01 rjoseph joined #gluster
10:01 rgustafs joined #gluster
10:01 Philambdo joined #gluster
10:03 Fen1 Proxmox uses glusterfs 3.5.2, and my servers are on Ubuntu 14.04 LTS; which version of glusterfs should i install ?
10:05 T0aD joined #gluster
10:06 Norky joined #gluster
10:07 bharata-rao joined #gluster
10:10 dusmant joined #gluster
10:10 atinmu joined #gluster
10:19 meghanam joined #gluster
10:23 Philambdo joined #gluster
10:25 sickness left #gluster
10:25 fandi joined #gluster
10:26 stickyboy ndevos: Backblaze has shown that Seagate 3.0TB drives fail at a much higher rate than Hitachi or WD: https://www.backblaze.com/blog/hard-drive-reliability-update-september-2014/
10:27 stickyboy They have 30,000+ drives spinning in their data center. :)
10:34 soumya_ joined #gluster
10:41 hagarth joined #gluster
10:50 rjoseph joined #gluster
10:51 karnan joined #gluster
10:58 meghanam joined #gluster
11:09 ndevos stickyboy: ah, interesting
11:13 nueces joined #gluster
11:14 bala joined #gluster
11:25 anoopcs joined #gluster
11:37 booly-yam-6137 joined #gluster
11:40 soumya joined #gluster
11:42 kkeithley1 joined #gluster
11:43 nishanth joined #gluster
11:45 calum_ joined #gluster
11:46 dusmant joined #gluster
11:46 karnan joined #gluster
11:47 ndevos REMINDER: Gluster Community Bug Triage meeting starting in ~15 minutes in #gluster-meeting
11:52 ctria joined #gluster
11:53 meghanam joined #gluster
11:56 DV joined #gluster
11:59 anoopcs joined #gluster
12:00 ndevos REMINDER: Gluster Community Bug Triage meeting starting *now* in #gluster-meeting
12:01 dusmant joined #gluster
12:04 jcastillo joined #gluster
12:06 bala joined #gluster
12:08 meghanam joined #gluster
12:11 dusmant joined #gluster
12:11 stickyboy ndevos: Seeing mixed reports about replace-brick. In 3.5.x should I replace-brick commit force, or start?
12:11 rjoseph joined #gluster
12:13 ndevos stickyboy: I dont really know, maybe ask atinmu?
12:13 stickyboy k
12:14 stickyboy Or maybe I just upgrade to 3.6.1 since my storage is down for maintenance anyways. ;)
12:16 atinmu stickyboy, as per our recent release, all replace-brick commands except commit force are deprecated
12:16 atinmu stickyboy, I need to check the same for 3.5.x series though
12:17 pcaruana joined #gluster
12:18 stickyboy atinmu: The markdown docs on master still mention `replace-brick start`, so I was super confused.
12:19 stickyboy But it seems commit force was preferred even in 3.5.x-era mailing list discussions.
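
For a replicated volume, the form being discussed is roughly the following (names are illustrative); the older start/status/commit sequence is what the deprecation notice refers to.

    gluster volume replace-brick myvol oldserver:/bricks/myvol newserver:/bricks/myvol commit force
    gluster volume heal myvol full     # let self-heal repopulate the new brick
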
12:23 kalzz joined #gluster
12:32 diegows joined #gluster
12:35 hagarth joined #gluster
12:49 meghanam joined #gluster
12:50 RameshN_ joined #gluster
12:53 bala joined #gluster
12:58 wkf joined #gluster
13:01 Slashman_ joined #gluster
13:02 LebedevRI joined #gluster
13:02 anoopcs joined #gluster
13:04 Fen1 joined #gluster
13:07 lpabon joined #gluster
13:14 bjornar joined #gluster
13:17 nbalacha joined #gluster
13:17 rjoseph joined #gluster
13:18 coredump joined #gluster
13:22 julim joined #gluster
13:22 soumya joined #gluster
13:27 wkf joined #gluster
13:27 elico joined #gluster
13:34 vikumar joined #gluster
13:36 bennyturns joined #gluster
13:38 bene2 joined #gluster
13:41 rjoseph joined #gluster
13:43 coredump joined #gluster
13:47 bala joined #gluster
13:47 nueces joined #gluster
13:48 shubhendu joined #gluster
13:49 nueces left #gluster
14:01 badone joined #gluster
14:01 pp joined #gluster
14:05 ertyui joined #gluster
14:16 ryao joined #gluster
14:17 dgandhi joined #gluster
14:22 virusuy joined #gluster
14:22 virusuy joined #gluster
14:25 plarsen joined #gluster
14:33 mikedep333 joined #gluster
14:36 karnan joined #gluster
14:43 meghanam joined #gluster
14:44 dbruhn joined #gluster
14:46 lmickh joined #gluster
14:46 dbruhn The roadmap link on the main page is a dead link. Not sure who has access to change it
14:47 kalzz joined #gluster
14:49 lalatenduM joined #gluster
14:59 Fen1 Proxmox uses glusterfs 3.5.2, and my servers are on Ubuntu 14.04 LTS; which version of glusterfs should i install ?
14:59 neofob joined #gluster
15:08 _dist joined #gluster
15:10 karnan joined #gluster
15:20 ricky-ticky3 joined #gluster
15:22 wushudoin joined #gluster
15:34 mikemol joined #gluster
15:34 bala joined #gluster
15:37 mikemol So, I have a pair of gluster bricks. All writes should go to both bricks. I have one gluster client that's pulling 150Mb/s-300Mb/s over the network from one of these bricks, but I don't know why.
15:38 mikemol None of the processes on the client's computer appear to be doing much in the way of I/O.
15:40 mikemol glusterfs mount point logs on the client node don't indicate to me that there's anything ongoing. I show occasional self-heals, but I don't know how to see if one of those is happening at a given moment.
15:43 booly-yam-6137 joined #gluster
15:50 hagarth joined #gluster
15:52 _dist mikemol: What are you running on the bricks? Also, I assume you're using a replica 2 volume? mbs=mbits or mbytes?
15:52 mikemol _dist: Will get back to you; phone
15:53 tdasilva joined #gluster
15:59 semiosis mikemol: client log files are in /var/log/glusterfs/the-mount-point.lgo
15:59 semiosis log
16:11 roost joined #gluster
16:18 bennyturns joined #gluster
16:19 wkf joined #gluster
16:21 nbalacha joined #gluster
16:23 harish joined #gluster
16:25 gem joined #gluster
16:26 mikemol _dist: CentOS 6. glusterfs 3.5.2-1.el6 from glusterfs-epel repo. replica 2, yes. Mb/s is megabits/s.
16:29 mikemol Interestingly, the transfer appears to have stopped. No new log entries between when I popped in here to ask, and now.
16:31 coredump joined #gluster
16:32 calum_ joined #gluster
16:35 _dist mikemol: Could be a heal, could be a crawl (though 300mbits for a crawl is quite a bit). I'd suspect it's a legitimate read from one of your clients, do you have any?
16:36 mikemol _dist: It was an extended duration thing; ran at least an hour. I watched with htop on the host in question, and didn't see processes doing more than a few K/s in reads or write.
16:37 mikemol But the client in question is a VM host, and the VM guests are using virtio. So I don't know that it would necessarily show up.
16:42 bala joined #gluster
16:43 _dist mikemol: It would definitely show up; I believe glusterfsd should show up in an iftop
16:44 _dist on the brick that has the IO happening, it should show up under iftop (on both client and brick) and iotop on the brick; I'd expect the reads/writes under the glusterfsd process
16:44 mikemol Yeah, that's how I knew to come in here; iftop gave me the port, and lsof gave me the process. But nothing appeared to be writing to the fuse filesystem on the vm host.
16:46 _dist mikemol: if you can, I strongly recommend using libgfapi instead of fuse
16:46 mikemol (To be clear, iftop and lsof on the vm host revealed that it was a gluster process that was the source of the traffic.)
16:46 smohan joined #gluster
16:46 _dist mikemol: however, that aside you should definitely see reads using "IOTOP" on the brick, not the vm host
16:46 calisto joined #gluster
16:47 mikemol _dist: In the works. But it's going to mean getting all our VM hosts updated to Cent7 first.
16:47 mikemol _dist: For sure, there was I/O going on on the brick; half a CPU core was in iowait, while the other brick wasn't in iowait at all.
16:49 _dist mikemol: does iostat or something like nmon point to your disks as the bottleneck? I wouldn't expect any iowait on 300mbits, but it does depend on your storage setup. Then again, I've not used fuse for vm hosting either
16:51 sputnik13 joined #gluster
16:54 mikemol Since the transfer is no longer active, I can't do any live analysis on it. But I can say I can easily do streaming writes and reads in excess of 300 Mbytes/s on the arrays on the bricks.
16:54 mikemol I did do a pcap dump on the network traffic a couple times briefly during the transfer, and it was primarily READ calls, with the occasional LOOKUP.
16:55 wkf joined #gluster
16:55 ikarius joined #gluster
16:56 tom[] joined #gluster
16:56 _dist mikemol: I would suspect (but can't verify without testing) fuse may be the cause of the problem. Perhaps someone else here is running VMs over fuse and can add more
16:56 mikemol k.
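
When traffic like this is active, a few checks along the lines _dist suggests can narrow it down; the volume name below is illustrative, and the statistics subcommand may not exist on every release.

    gluster volume heal myvol info          # entries still pending self-heal
    gluster volume heal myvol statistics    # crawl/heal activity, on releases that support it
    iftop -P                                # which peer and port the traffic belongs to (run on the brick)
    iotop -o                                # confirm glusterfsd is the process doing the disk I/O (run on the brick)
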
16:57 ikarius question about glusterfs;  can you designate a size for a gluster volume,  or is the size always automatically determined by the sizes of the backing bricks divided by the replication factor?
16:57 mikemol To reiterate, the issue was a 300Mb/s data transfer over the network that I couldn't find the cause of.
16:57 _dist I'm also wondering (off topic) if 3.6 is going to have a better heal routine for large files. I believe currently a healing VM pretty much replaces the entire file. For us this means it takes about 21 hours when we take a brick down for it to heal
16:57 _dist mikemol: I understand, unless it happens again you won't be able to review what, I think an iotop would help the most
16:57 mikemol Ew.
16:58 mikemol Separate issue, now, and one I've been trying to figure out the cause of.
16:58 ikarius … I looked through the documentation and didn't find any place it really discussed volume sizes
16:58 ikarius also, can you back more than one volume out of the same brick?
16:58 _dist ikarius: by default there really isn't a size. All the bricks just believe there is enough room, if one FS runs out the brick will likely go offline
16:59 mikemol I have two volumes served up by these bricks. One, extimages, is backed on each brick by an array of big, slow drives. The other, intimages is backed on each brick by an array of small, fast drives.
16:59 ikarius _dist: that seems…. odd.  What's gluster going to return if you have a gluster volume mounted and you issue a "df" command?
16:59 mikemol When I try to do a streaming write to extimages, I get a good 50Mbytes/s. When I do the same streaming write to intimages, I get an average of 5Mbytes/s, and very, very stuttery on the wire.
17:00 booly-yam-6137 joined #gluster
17:00 mikemol ikarius: Yes, you can back more than one volume out of each brick. It's no big deal. We use it to differentiate between underlying storage types. (I.e. small, fast drives vs big, slow ones.)
17:01 ikarius mikemol: but if you do that, there's no real protection against one volume running a different volume out of space?
17:01 mikemol ikarius: We use a different mount point to sit under each volume.
17:01 _dist mikemol: Well, technically by gluster terms you can store more than one brick on each server or FS. Volumes are built out of bricks, a single brick isn't going to be part of more than one volume, but a single server or FS can host more than one brick which can either be part of the same volume, or multiple volumes
17:02 mikemol Thanks for the correction.
17:02 soumya joined #gluster
17:02 _dist mikemol: yeah I understand it's easy to confuse especially when you're also using it as a VM storage host and there's other stuff with that
17:02 mikemol ikarius: To take _dist
17:02 mikemol blah
17:03 ikarius _dist: looking at the docs, I don't see any differentiation between a FS and a brick
17:03 ikarius http://www.gluster.org/community/documentation/index.php/Getting_started_configure
17:03 mikemol ikarius: To incorporate _dist's correction, we have each physical server share up two bricks, each backed by a different mount point.
17:03 ikarius you simply format an xfs filesystem and then directly designate that filesystem as a brick
17:04 ikarius (in that documentation I linked) … unless I'm missing something
17:06 ikarius so, one FS per brick- is that correct?
17:06 coredump joined #gluster
17:07 mikemol ikarius: That's the sanest way to do it, from what I gather.
17:12 Telsin joined #gluster
17:20 Telsin joined #gluster
17:25 ildefonso joined #gluster
17:30 Telsin left #gluster
17:50 coredump joined #gluster
17:51 _dist ikarius, mikemol: I was at lunch. We host our gluster bricks on ZFS, and we use a designated ZFS dataset (close to an FS) for each brick. All of our servers have one brick per volume (replica 3), but for testing I've definitely setup 3-4 bricks on a single server for the same volume. I've seen some people setup an FS and brick per drive (sort of like ceph would). Personally I wouldn't do that with gluster because I'd
17:51 _dist rather manage drive failure with raid than gluster
17:53 ikarius _dist: sure- while you can put multiple filesystems as bricks and back a  single volume with them, I was really asking about whether you could take a single FS and host multiple bricks on it, each brick serving a different volume
17:53 Gill joined #gluster
17:53 lalatenduM joined #gluster
17:53 ikarius but it looks like that's not something gluster really supports.
17:54 semiosis ikarius: a brick is just a directory on a server.  you can definitely use different directories on the same mounted filesystem as bricks in different volumes, but it's not a great idea since they'll consume each others' space
17:55 semiosis better to slice up your physical disks using lvm and then have a mounted filesystem for each brick
17:55 ikarius semiosis: yup.
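
A minimal sketch of that layout (one LV and one XFS filesystem per brick); the VG/LV names, sizes and paths are illustrative, and the inode size follows the usual GlusterFS brick recommendation.

    lvcreate -L 500G -n brick_myvol vg_gluster
    mkfs.xfs -i size=512 /dev/vg_gluster/brick_myvol    # larger inodes leave room for gluster's xattrs
    mkdir -p /bricks/myvol
    mount /dev/vg_gluster/brick_myvol /bricks/myvol     # plus an /etc/fstab entry so it persists
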
18:02 _dist ikarius: To be clear, when I say FS I mean filesystem, a formatted area of storage that's something like XFS, EXT3/4 etc. You can definitely have as many bricks as will fit on a single FS as long as they are in different directories. Gluster warns you when creating a volume if you have the brick in the root of a mount point. The reason it does this is that if at boot time the mount doesn't happen, your mount location will
18:02 _dist still exist and end up with data being "healed" to the / directory (worst case)
18:02 ikarius _dist: eesh, good to know.
18:04 _dist ikarius: So if I share a brick directory I'll do something like /mnt/vms/gluster1 (where the gluster1 directory is created on the mounted FS. That way if the mount fails, the brick won't start)
18:05 ikarius _dist: ahh, ok, that's a useful technique
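
In other words, something like the sketch below, where the brick lives in a subdirectory of the mount point so a failed mount leaves the brick unable to start; the server names and replica count mirror _dist's setup and are otherwise illustrative.

    mkdir /mnt/vms/gluster1            # run on each server, only after /mnt/vms is actually mounted
    gluster volume create vms replica 3 \
        server1:/mnt/vms/gluster1 server2:/mnt/vms/gluster1 server3:/mnt/vms/gluster1
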
18:07 wkf joined #gluster
18:14 RameshN joined #gluster
18:32 jmarley joined #gluster
18:32 ckotil joined #gluster
18:35 T0aD joined #gluster
18:36 cfeller joined #gluster
18:59 jmills joined #gluster
18:59 glusterbot News from newglusterbugs: [Bug 1184191] DHT: Rebalance- Rebalance process crash after remove-brick <https://bugzilla.redhat.com/show_bug.cgi?id=1184191>
19:00 bennyturns joined #gluster
19:03 roost joined #gluster
19:05 cornfed78 joined #gluster
19:06 cornfed78 Hi everyone- I was wondering if I might run a question by the group.. I'm a gluster novice, and I wanted to get some input.
19:07 cornfed78 I want to setup a replicated volume, but the second node I intend to use isn't available yet, I want to add it later
19:07 cornfed78 so, is it possible to setup a gluster volume, then do a peer probe, then add another brick to start the replication?
19:07 cornfed78 https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_setting_volumes.md - that's the documentation I'm reading
19:08 cornfed78 I've read how to add a brick to a distributed volume, but I haven't read anywhere how to turn a stand-alone volume /into/ a replicated one
19:08 cornfed78 most of the documentation I've come across assumes you already have two servers..
19:09 cornfed78 i'm starting with one, want to move the data to it, then rebuild the original server as the second gluster server..
19:09 dbruhn cornfed78, you can change the replication count after the fact. I've not tested it, but have read as people have done it in here. My suggestion would be to build a couple vm's and test your scenario and make sure it works how you want it to, and how you are comfortable.
19:09 dbruhn At least those comments were true as of version 3.3/3.4
19:09 cornfed78 that's a good idea.. thanks..
19:09 dbruhn np
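
A rough outline of cornfed78's scenario, assuming it behaves as described (names are illustrative; as dbruhn says, verify on throwaway VMs first):

    # start with a single-brick volume on the surviving server
    gluster volume create myvol server1:/bricks/myvol
    gluster volume start myvol
    # later, once the second server is rebuilt:
    gluster peer probe server2
    gluster volume add-brick myvol replica 2 server2:/bricks/myvol
    gluster volume heal myvol full           # copy existing data onto the new replica
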
19:10 dbruhn Anyone in here running CTDB with NFS?
19:12 calisto joined #gluster
19:16 bennyturns joined #gluster
19:18 lkoranda joined #gluster
19:23 semiosis dbruhn: i know people have been here who use it. you might want to check the channel logs for nicks
19:25 dbruhn The reason I ask, is because almost every write up I see focuses on samba, and the NFS portion seems to be an afterthought.
19:25 dbruhn Appreciate the feedback as always.
19:25 semiosis i believe ctdb comes from the samba team
19:26 semiosis you might want to try a samba channel if you have questions about it
19:26 semiosis #samba, maybe?
19:26 dbruhn I'm pretty sure it does, I've used it with gluster for samba
19:26 dbruhn maybe I am trying to solve my problem incorrectly. Trying to create HA NFS for xenserver
19:38 _Bryan_ joined #gluster
19:57 neofob joined #gluster
20:01 MugginsM joined #gluster
20:01 gildub joined #gluster
20:13 Rapture joined #gluster
20:13 n-st joined #gluster
20:27 DV joined #gluster
20:36 MacWinner joined #gluster
20:37 MacWinner hi, if I write a directory by accident to a /brick directory that gluster is managing, is it safe to just delete the directory?
20:37 MacWinner the /brick is mounted on a drive that gluster is using in a replica set.. my engineer created a directory there by accident
20:42 MugginsM joined #gluster
20:44 Rapture i'm currently running a distributed + replicated gluster setup. Each node has 2 bricks and I am wondering what the steps are to add another brick to each node which would increase the size
20:52 dbruhn are you adding bricks on the existing two glister servers? or are you adding new servers?
20:52 Rapture I would like to just add bricks to the existing servers
20:53 dbruhn once you have the file systems mounted and formatted and ready for use
20:53 Rapture mmhmm
20:53 dbruhn gluster volume add-brick test-volume server1:/path/to/new/brick server2:/path/to/new/brick
20:53 dbruhn if that makes sense
20:53 Rapture it does
20:53 dbruhn *thumbs up*
20:54 Rapture but does the order matter? originally I did gluster volume create test-volume replica 2 server1:/brick1 server1:/brick2 server2:/brick1 server2:/brick2
20:55 Rapture would the added volume impact the ordering at all? (if that even matters)
20:55 dbruhn yes the order matters, currently you are replicating server1:/brick1 to server1:/brick2
20:56 Rapture one more question, should I stop the volume before adding?
20:56 dbruhn nope
20:56 Rapture nice
20:57 PeterA joined #gluster
20:57 dbruhn you might have to reconnect some of the clients if they don't register the new size properly
20:57 dbruhn but it's rare
20:57 Rapture I have monitoring setup for that, but good to know
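
Putting the exchange together, the expansion looks roughly like this (paths illustrative): bricks are added in multiples of the replica count, consecutive bricks form a new replica pair, and a rebalance spreads existing files onto the new bricks.

    gluster volume add-brick test-volume server1:/path/to/new/brick server2:/path/to/new/brick
    gluster volume rebalance test-volume start
    gluster volume rebalance test-volume status
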
21:08 primusinterpares Greetings, all.  Is there anyone around that's knowledgable on the gfapi Python bindings I could chat with?
21:09 semiosis hi
21:09 glusterbot semiosis: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
21:09 semiosis primusinterpares: ^^^
21:26 primusinterpares semiosis: thanks, I'll start specifically with "where is upstream?"  Is this still accurate? https://github.com/gluster/libgfapi-python/blob/master/doc/markdown/dev_guide.md
21:46 mbukatov joined #gluster
21:46 JoeJulian @hack
21:46 glusterbot JoeJulian: The Development Work Flow is at http://www.gluster.org/community/documentation/index.php/Development_Work_Flow
21:47 JoeJulian The github source is a mirror of gerrit.
21:47 JoeJulian Gerrit can be found at that link.
21:51 Rapture dbruhn: thanks that simple command worked perfectly
21:51 dbruhn no problem
21:53 dgandhi joined #gluster
21:58 primusinterpares JoeJulian: thanks for clarifying that.  The other language bindings on http://www.gluster.org/community/documentation/index.php/Language_Bindings don't appear to have Gerrit repos, that right?
22:02 B21956 joined #gluster
22:06 JoeJulian I haven't been following the bindings. Officially, if it's not in gerrit, then it's supposed to be the ,,(forge).
22:06 glusterbot http://forge.gluster.org
22:10 semiosis you could always use the ,,(java) library with jython :-D
22:10 glusterbot https://github.com/semiosis/glusterfs-java-filesystem
22:10 ildefonso joined #gluster
22:15 purpleidea whenever i need to interact with glusterfs in java i always use the semiosis ,,(java) library
22:15 glusterbot https://github.com/semiosis/glusterfs-java-filesystem
22:16 purpleidea semiosis: btw, still hacking on adding the lesser used java calls?
22:16 semiosis purpleidea: is that vacuously true?
22:16 semiosis or do you *actually* use it?
22:17 purpleidea semiosis: :( i'm too afraid of java to try... but a true story is i've referred more than one java + glusterfs user to your project.
22:17 semiosis purpleidea: aww thanks
22:17 purpleidea re: vacuous: good word
22:18 semiosis i dont understand (a) why redhat didn't build this, and (b) now that i've started, why no one else (like redhat) is interested in contributing
22:18 purpleidea semiosis: you know, assuming those things are true (i'm not sure they are) i can actually get those sorts of questions answered...
22:18 semiosis but in any case, i'm using it as a pedagogical device, mentoring CS students at FIU by having them contribute to the project :)
22:20 bennyturns joined #gluster
22:21 gomikemi1e i have to blow away my gluster installation and recreate it, is deleting the content of /var/lib/gluster enough?
22:21 semiosis should be
22:21 semiosis you'll also need to twiddle some xattrs on the bricks to avoid path or a prefix error
22:21 semiosis you'll also need to twiddle some xattrs on the bricks to avoid path or a prefix of it error
22:21 glusterbot semiosis: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
22:21 semiosis that
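
The linked post boils down to roughly the following, run against each old brick directory (path illustrative) before reusing it in a new volume:

    setfattr -x trusted.glusterfs.volume-id /bricks/myvol
    setfattr -x trusted.gfid /bricks/myvol
    rm -rf /bricks/myvol/.glusterfs
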
22:23 gomikemi1e well, i can format the underlying ebs volumes since i will have to recreate the bricks...
22:24 gomikemi1e ...nvm
22:24 semiosis oh you mean really start over
22:25 semiosis just delete the ebs vols and make new ones
22:25 semiosis or reformat, whatever
22:25 gomikemi1e this is sad...this is easier than requesting a SG edit on my company
22:26 gomikemi1e I do think gluster should allow reusing ports, even if you have to agree that what you are doing sucks and have to answer yes 10 times
22:34 rwheeler joined #gluster
22:43 uebera|| joined #gluster
22:43 uebera|| joined #gluster
22:49 JoeJulian Not sure how answering yes 10 times is easier than formatting the partition/lv/ebs.
23:02 neofob left #gluster
23:02 semiosis well gluster stores those ports somewhere, so you could just go in and meddle with them
23:06 Staples84 joined #gluster
23:18 primusinterpares semiosis/others: The steps in https://github.com/semiosis/libgfapi-jni walk through disabling security.  Is there any way to actually have secure libgfapi client access?
23:19 semiosis allowing unprivileged ports is not the same as disabling security
23:19 semiosis only accepting connections from privileged ports didnt give you any security to begin with
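
The "unprivileged ports" settings in question are typically the pair below (volume name illustrative):

    gluster volume set myvol server.allow-insecure on
    # and in /etc/glusterfs/glusterd.vol, inside the "volume management" block:
    #     option rpc-auth-allow-insecure on
    # then restart glusterd
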
23:21 semiosis but to your question, is there any way to actually have secure libgfapi client access... not that I know of.  perhaps when SSL support is finalized there will be a way to set up a (cryptographically) secure client
23:21 semiosis but idk of any way to do that today
23:21 semiosis and that will be fun fun fun to implement in java :/
23:21 wkf joined #gluster
23:22 semiosis keytool & keystores & truststores OH MY
23:23 primusinterpares semiosis: point taken on unprivileged ports, and on SSL...yeah, I want someone else writing that.  :)  But is there a way to control what UID/GID is being used by the gfapi client?
23:23 JoeJulian I thought ssl client connections were done?
23:23 JoeJulian @lucky glusterfs wire encryption
23:23 glusterbot JoeJulian: http://community.redhat.com/blog/2014/05/glusterfs-3-5-unveiled/
23:23 semiosis oh wow
23:24 PeterA i'm getting worse and worse with the quota mismatch issue
23:24 semiosis ok there's some new functions in glfs.h i was not aware of
23:24 JoeJulian http://blog.gluster.org/author/zbyszek/
23:24 semiosis https://github.com/gluster/glusterfs/blob/master/api/src/glfs.h#L388-393
23:24 PeterA got another mount using only 35G but now reporting 165G used :(
23:25 semiosis JoeJulian: but i still dont see anything in there related to SSL certificates
23:26 jbrooks joined #gluster
23:26 semiosis well, gotta run
23:26 semiosis ttfn
23:26 primusinterpares semiosis: thanks for the help.
23:35 semiosis yw
