
IRC log for #gluster, 2014-06-28


All times shown according to UTC.

Time Nick Message
00:53 VeggieMeat joined #gluster
00:53 marcoceppi joined #gluster
00:53 JordanHackworth joined #gluster
00:53 JoeJulian joined #gluster
00:56 glusterbot joined #gluster
00:58 bennyturns joined #gluster
01:15 bala joined #gluster
01:34 TaiSHi joined #gluster
01:35 stickyboy joined #gluster
01:35 TaiSHi Hi everyone, I'm evaluating whether gluster would fit my needs
01:35 TaiSHi I have a set of small vms and create new ones on demand
01:36 TaiSHi My idea is to have them auto-provisioned by gluster (they already install all the software I need; the only thing lacking is the website files)
01:46 ilbot3 joined #gluster
01:46 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:01 gmcwhistler joined #gluster
02:14 pranithk joined #gluster
02:14 pranithk Is anyone online who uses debs for gluster?
02:14 Peter3 yes
02:15 Peter3 i m on ubuntu
02:15 pranithk Peter3: great!
02:15 Peter3 kinda deb :)
02:15 pranithk Peter3: Did you install 3.5.1 by any chance?
02:17 pk1 joined #gluster
02:17 pranithk Peter3: u there?
02:18 Peter3 yes i m running 3.5.1 now
02:19 pranithk Peter3: With rpm based installations the binaries are copied into /usr/sbin. Which directory is it for Ubuntu?
02:20 pranithk Peter3: I meant where are glusterfs/glusterd/glusterfsd binaries stored on Ubuntu?
02:22 Peter3 same /usr/sbin
02:22 pranithk Peter3: Could you check if it has the file glfsheal?
02:22 Peter3 no
02:23 Peter3 does not exist
02:23 pk1 joined #gluster
02:23 Peter3 does it mean the deb will not heal?
02:23 pranithk Peter3: No, just the 'volume heal info' command doesn't work
02:23 Peter3 yes
02:23 Peter3 i filed that bug
02:23 pranithk Peter3: Hey!
02:23 Peter3 https://bugzilla.redhat.com/show_bug.cgi?id=1113778
02:23 glusterbot Bug 1113778: medium, unspecified, ---, pkarampu, ASSIGNED , gluster volume heal info keep reports "Volume heal failed"
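
Per the discussion above, glfsheal is the binary that 'volume heal info' relies on, so its absence only breaks that query, not healing itself. A minimal way to reproduce the symptom from the bug, assuming a volume named v0; the "Volume heal failed" message is the one quoted in the bug title:

    ls -l /usr/sbin/glfsheal      # missing on the affected 3.5.1 deb installs
    gluster volume heal v0 info   # reportedly fails with "Volume heal failed" without it
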
02:23 pranithk Peter3: I am sorry I didn't recognize you :-)
02:24 Peter3 hehe :)
02:24 Peter3 that's ok
02:24 pranithk Peter3: I am pkarampu,
02:24 Peter3 i've been waiting a long time for that file to be back in
02:24 Peter3 yes
02:24 Peter3 i know :P
02:24 Peter3 thanks for working on this
02:24 pranithk Peter3: I think we need to talk to semiosis for packaging debs with the file included.
02:24 Peter3 that would be VERY nice :D
02:25 Peter3 so that's cosmetic?
02:25 pranithk Peter3: I don't see him online though :-(
02:25 pranithk Peter3: Yes :-)
02:25 Peter3 cool
02:25 Peter3 when is 3.5.2 gonna be out?
02:25 Peter3 as i also filed another bug related to quotad :)
02:26 pranithk Peter3: I know. I fixed that as well ;-)
02:26 pranithk Peter3: you should ask ndevos who should be online after 2-3 hours.
02:26 pranithk Peter3: But this heal info issue is just a debian packaging issue IMO?
02:26 Peter3 cool :D
02:27 Peter3 that actually sounds better for me as i can live with that
02:27 Peter3 we are moving production data into 3.5.1 as of now
02:27 pranithk Peter3: Got it.
02:28 Peter3 lucky not live yet :P
02:29 Peter3 can we do a diff on the binaries between rpm and deb?
02:29 Peter3 to make sure we are not missing any more?
02:29 pranithk Peter3: This is the only new binary in 4 years so don't worry ;-)
02:30 Peter3 o nice :D
02:30 Peter3 thanks again!!
02:31 bchilds joined #gluster
02:32 pranithk Peter3: But we still haven't confirmed whether the issue is in the debian packaging or not. We need a confirmation
02:34 pranithk Peter3: If you happen to see semiosis, could you ping him about that?
02:35 pk1 joined #gluster
02:37 pranithk Peter3: http://review.gluster.org/6511 is the patch that added this binary to the system for 3.5. The glusterfs.spec.in file in that changeset contains the change to install the binary. I wonder what ubuntu uses to build debs
02:37 glusterbot Title: Gerrit Code Review (at review.gluster.org)
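
One way to confirm whether this is a deb packaging gap is to compare the packaged file lists on both distributions. A rough sketch; the package name glusterfs-server on each side is an assumption and may differ per distribution:

    dpkg -L glusterfs-server | grep glfsheal   # Ubuntu/Debian: list files shipped by the package
    rpm -ql glusterfs-server | grep glfsheal   # RPM-based systems, for comparison
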
02:38 TaiSHi Is it possible to use master-only glusterfs ?
02:38 TaiSHi I want to hot-plug (and unplug) masters on demand
02:39 pranithk TaiSHi: What is 'master'? Are you referring to geo-rep?
02:39 Peter3 Thanks pkarampu! that's a semiosis question on how the deb is made
02:39 TaiSHi pranithk: tbh, I'm not sure
02:40 TaiSHi I could explain you my case
02:40 TaiSHi (and I will :P )
02:40 pranithk TaiSHi: go ahead
02:40 TaiSHi I have 3 webservers with same data, during some events, they auto scale to 4
02:40 TaiSHi saltstack already provisions them with software
02:41 TaiSHi I now need a filesystem solution. Which also solves an issue: when new data is created on a front end (by a user) I can't (or don't want to) wait 1 minute for it to rsync
02:41 TaiSHi Which was the previous solution (rsync back and forth, yuck)
02:43 * TaiSHi is waiting for his stupidity to sink in :P
02:43 pranithk TaiSHi: I guess all you need is a volume which has to be mounted on all the webservers.
02:44 TaiSHi Yes, but I saw there was a 'client' and a 'server'
02:44 TaiSHi Can't they all be "servers" ? (so sometime I could, say, remove webserver01 instead of 04)
02:44 pranithk Peter3: Ok this is gonna sound weird but for some reason I am facing difficulty in connecting to bugzilla. Could you update the bug you raised with the discussion we had and add 'me@louiszuckerman.com' to the CC list.
02:45 pranithk TaiSHi: Client and server are just processes. As long as you don't use nfs mounts, you can have both clients (i.e. mounts) and servers (i.e. bricks) on the same machine
02:46 TaiSHi pranithk: Is there any way you could point me to test that?
02:46 TaiSHi I have 3 VMs fired up right now
02:47 pranithk TaiSHi: Internet on my machine is doing wonders. At the moment all I can access is this IRC, no /google/gmail/youtube/bugzilla, did I mention google. :-(
02:47 pranithk TaiSHi: I can give you exact commands though
02:48 TaiSHi I'd appreciate that
02:48 harish_ joined #gluster
02:48 TaiSHi I have salt, but want to try it by myself first
02:48 pranithk TaiSHi: I don't know what is salt :-(. Unfortunately I can't google it either :-(
02:49 pranithk TaiSHi: first create a trusted storage pool with these 3 machines
02:49 pranithk TaiSHi: i.e. execute 'gluster peer probe VM2', 'gluster peer probe VM3' on VM1
02:49 bala joined #gluster
02:49 TaiSHi saltstack is like puppet or chef
02:50 pranithk TaiSHi: got it
02:51 pranithk TaiSHi: I am assuming VM1/VM2/VM3 to be the hostnames of those machines. You can use their IPs also if you don't want to use hostnames.
02:52 TaiSHi Yeah
02:53 pranithk TaiSHi: Execute those commands and after that execute 'gluster peer status'. You should see the two machines you just added; they should be in the 'Peer in Cluster (Connected)' state. The output will tell you that
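
Put together, the pool setup described above is just these commands, run on VM1 with the hostnames used in this walkthrough:

    gluster peer probe VM2
    gluster peer probe VM3
    gluster peer status    # each peer should report 'Peer in Cluster (Connected)'
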
02:56 TaiSHi Ok I see both peers
02:56 pranithk TaiSHi: Same state as I mentioned?
02:56 TaiSHi Yes, connected
02:56 TaiSHi Similar output in other vms
02:56 TaiSHi (each one referencing the other 2)
02:56 pranithk TaiSHi: Does it have 'Peer in Cluster' as well? That is more important
02:57 TaiSHi State: Peer in Cluster (Connected) <- that exactly
02:57 pranithk TaiSHi: Cool
02:58 pranithk TaiSHi: Now lets create a volume. Do you want the data to be present on all the VMs?
02:58 TaiSHi Peers share data?
02:58 TaiSHi I mean, peer 2 did become aware of peer 3
02:58 TaiSHi Through peer 1
02:58 TaiSHi (which is awesome)
02:58 pranithk TaiSHi: Yes ;-)
02:59 pranithk TaiSHi: Answer my question about data...
02:59 TaiSHi Sorry, yes
02:59 pranithk TaiSHi: Do you want exact same data to be present on all the three bricks?
02:59 TaiSHi That's exactly what I want, "real-time" data replication among all VMs
03:00 TaiSHi Yes, much like a RAID1
03:00 TaiSHi Any modification/addition should be replicated so user content is available in all VMs
03:00 TaiSHi (starting to grab the concept of brick)
03:01 pranithk TaiSHi: See now you changed the meaning of your requirement completely ;-)
03:01 pranithk TaiSHi: let me rephrase my question
03:01 TaiSHi Go ahead
03:01 TaiSHi (sorry if my english isn't good, not my native language)
03:02 pranithk TaiSHi: Dude, don't worry about language, I am from India
03:02 pranithk TaiSHi: You?
03:02 TaiSHi Argentina
03:02 TaiSHi But your english is better than mine
03:02 pranithk TaiSHi: Sorry it sounded like a guy's name. Extremely sorry if you are a girl
03:03 TaiSHi Also it's really difficult for me to explain exactly what I want
03:03 TaiSHi I am a guy
03:03 pranithk TaiSHi: who cares about language. As long as we can understand each other, it should be fine.
03:03 TaiSHi I have my sister's voice, but I'm still a guy
03:04 pranithk TaiSHi: hehe :-). I said, dude in one of my earlier statements, so just wanted to apologize if you are not a guy. Anyways lets get back to our problem at hand
03:04 pranithk TaiSHi: Do you want the data to be stored on each of the VMs? Or is it okay if the data is present on lets say just 2 of the VMs but can be read/written from all the three VMs?
03:05 TaiSHi Present on all of them
03:06 pranithk TaiSHi: so you will have 3 copies of the same file on all the VMs. Just making sure before we move forward. Say yes and we shall move ahead
03:08 TaiSHi Yes
03:08 TaiSHi So if N-1 were to fail, data would still be present
03:09 pranithk TaiSHi: gluster volume create v1 replica 3 VM1:/path/to/dir/where/we/store/the/files/ VM2:/path/to/dir/where/we/store/the/files/ VM3:/path/to/dir/where/we/store/the/files/
03:09 pranithk execute the command above. It should ask you to start the volume 'v1' if the creation is successful
03:11 TaiSHi volume create: dalepumas: success: please start the volume to access data
03:11 TaiSHi (sorry, named it after the project)
03:12 pranithk TaiSHi: No problem sir!
03:12 pranithk TaiSHi: execute "gluster volume start dalepumas" It should say volume start successful or something like that
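
As one sketch, the create/start step looks like this; the brick directory /data/brick1 is a placeholder, and depending on the release a brick on the root filesystem may need 'force' appended:

    gluster volume create dalepumas replica 3 \
        VM1:/data/brick1 VM2:/data/brick1 VM3:/data/brick1
    gluster volume start dalepumas
    gluster volume info dalepumas   # should list the three bricks and 'Status: Started'
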
03:12 zerick Hi folks, what would be the minimum hardware requirements for setting up GlusterFS under Ubuntu?
03:13 TaiSHi success
03:14 TaiSHi pranithk: have to afk for 5 min, bathroom
03:15 pranithk TaiSHi: Now execute "mount -t glusterfs VM1:/dalepumas /mount/point/path" on VM1. "mount -t glusterfs VM2:/dalepumas /mount/point/path" on VM2. "mount -t glusterfs VM3:/dalepumas /mount/point/path" on VM3
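
Mounting and a quick replication check, assuming the mount point /mnt/dalepumas exists on every VM; each client can name itself or any peer in the mount, since the brick list is fetched from whichever server is named:

    mkdir -p /mnt/dalepumas
    mount -t glusterfs VM1:/dalepumas /mnt/dalepumas   # on VM1; use VM2/VM3 on the others
    touch /mnt/dalepumas/hello                         # write on one VM...
    ls /mnt/dalepumas                                  # ...and it should be visible on the others
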
03:18 pranithk Peter3: Could you please update the bug and CC semiosis
03:20 TaiSHi ok vm2 hung
03:20 pranithk TaiSHi: why?
03:20 TaiSHi the other 2 worked fine
03:20 TaiSHi Ran the first mount incorrectly
03:21 pranithk TaiSHi: okay... umount and mount them again then...
03:21 TaiSHi vm1 and 3 worked fine
03:21 pranithk TaiSHi: oh then just umount and mount on VM2
03:21 TaiSHi Had to reboot it, wouldn't respond
03:22 TaiSHi Nothing to worry about
03:22 pranithk TaiSHi: Okay now create data on the mount points and it should appear
03:22 TaiSHi There we go
03:22 TaiSHi Great
03:22 TaiSHi Works like a dream
03:23 TaiSHi Can I extend this cluster? or reduce it?
03:23 pranithk TaiSHi: Meaning?
03:23 TaiSHi Well say I want to add VM4 to it
03:23 TaiSHi And then remove VM1 because, say, the node is going down forever
03:24 pranithk TaiSHi: which version are you using?
03:25 TaiSHi glusterfs 3.4.2 built on Jan 14 2014 18:05:37
03:25 pranithk TaiSHi: That version can't do it correctly.
03:26 TaiSHi Would I need 3.5 ?
03:27 pranithk TaiSHi: I am just checking if the patch went in 3.5. Gimme a sec
03:27 TaiSHi Thanks
03:27 TaiSHi I can work with PPAs
03:28 pranithk TaiSHi: Unfortunately it is not released yet :-(. It is not in any release branch yet. Still in master
03:29 kanagaraj joined #gluster
03:29 pranithk TaiSHi: It needs http://review.gluster.org/7155 patch to work properly
03:29 glusterbot Title: Gerrit Code Review (at review.gluster.org)
03:30 pranithk TaiSHi: It will be part of release 3.6 which will be released in some months I think.
03:30 TaiSHi Hmm
03:30 pranithk TaiSHi: Sorry man!
03:30 TaiSHi If I shutdown VM1
03:30 TaiSHi And delete it
03:30 TaiSHi What would happen? (cluster wise)
03:30 TaiSHi Don't be sorry! You helped me out a lot
03:31 TaiSHi Seriously, thanks
03:32 pranithk TaiSHi: Nothing will happen. Things will work fine. There is this concept called self-heal. It heals the data that one of the machines did not get while it was down
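
Self-heal can also be triggered and inspected by hand once a machine comes back, using the volume created above (the 'info' form is the one that needs the glfsheal binary discussed earlier in this log):

    gluster volume heal dalepumas        # kick off healing of pending entries
    gluster volume heal dalepumas info   # list entries that still need healing
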
03:33 TaiSHi wouldn't volume add-brick help if I probe a VM4?
03:33 pranithk TaiSHi: Basically we need the patch I mentioned above so that we can add/delete machines in any order
03:33 TaiSHi Hmm
03:33 TaiSHi Well
03:33 TaiSHi For now I will do a fixed amount of VMs
03:33 pranithk TaiSHi: With earlier releases you can only add but not remove
03:34 TaiSHi But if I delete the machine VM4
03:34 TaiSHi (i.e. which I added to the cluster)
03:34 TaiSHi I mean delete it permanently
03:34 pranithk TaiSHi: Wait thinking....
03:34 TaiSHi :D
03:34 pranithk TaiSHi: You can do this.
03:35 TaiSHi Love it when things work out :P
03:36 pranithk TaiSHi: You can keep adding the VMs. But if you want to delete a VM, just don't do remove-brick. The replication module can write data as long as there is at least one VM up. Once you upgrade to 3.6 you can happily remove them.
03:37 TaiSHi I'm going to test it
03:37 TaiSHi Going to kill vm3
03:37 TaiSHi And reinstall it
03:37 pranithk TaiSHi: If you delete from the last it should be fine I think
03:38 pranithk TaiSHi: If you delete from the beginning or in the middle, then there is a problem in earlier releases.
03:38 TaiSHi It's always +/- 1 VM
03:38 pranithk TaiSHi: You can add VM4 and delete VM4. No problem.
03:38 TaiSHi I'm going to do some field testing
03:38 TaiSHi Do I have to use remove-brick ?
03:38 pranithk one second
03:39 TaiSHi Thanks
03:39 pranithk TaiSHi: If you have VM1, VM2, VM3 and you remove VM1 then there could be a problem.
03:39 TaiSHi Have you ever worked in tier 1?
03:40 pranithk TaiSHi: Not really :-(
03:40 pranithk TaiSHi: I am one of the developers of gluster.
03:40 pranithk TaiSHi: I maintain replication in gluster mainly
03:41 TaiSHi You're very patient
03:41 TaiSHi I mean people-patient
03:41 TaiSHi Not many people are like that
03:41 pranithk TaiSHi: well it's a weekend here and I don't really have a net connection to do pretty much anything else
03:42 pranithk TaiSHi: The use case you asked for is not used by a lot of people as far as I know. So there could be some bugs lurking.
03:42 pranithk TaiSHi: It would be nice if you could test and report bugs.
03:43 pranithk TaiSHi: most of the users I talked to generally keep adding bricks and don't look back. They sometimes replace some bricks.
03:43 TaiSHi Oh just found one case I need assistance with
03:43 TaiSHi How do I reduce replica count ?
03:44 pranithk TaiSHi: for increasing replica count it is gluster volume add-brick <volname> replica 4 <new-vm>:/<new-path>
03:44 TaiSHi What if I add a brick and don't increase replica count ?
03:44 pranithk TaiSHi: for decreasing replica count it is gluster volume remove-brick <volname> replica 3 <new-vm>:/<new-path> we just added
03:45 pranithk TaiSHi: I think the cli doesn't allow that
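
For reference, a sketch of growing and then shrinking the replica count with the CLI, assuming a new machine VM4 and the brick path used earlier; on pre-3.6 releases the removal caveats discussed above still apply:

    gluster peer probe VM4
    gluster volume add-brick dalepumas replica 4 VM4:/data/brick1
    gluster volume remove-brick dalepumas replica 3 VM4:/data/brick1 force   # back to three copies
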
03:45 pranithk TaiSHi: Please send mail on gluster-users mailing list for any assistance.
03:46 TaiSHi I will
03:46 pranithk TaiSHi: there are quite a few people who can help you out there. Ok?
03:46 TaiSHi For now I'm going to keep playing
03:46 TaiSHi It's almost 1 am
03:46 TaiSHi I really appreciate all your help
03:46 pranithk TaiSHi: Alright man, I gotta go now. Cya
03:46 pranithk TaiSHi: No probs
03:46 TaiSHi Take care!
03:47 pranithk TaiSHi: It will help us a lot if you report bugs. Take care! will be logging off now
03:47 TaiSHi If I encounter any, will sure do
04:01 bharata-rao joined #gluster
04:25 bala joined #gluster
04:53 davinder15 joined #gluster
05:23 lalatenduM joined #gluster
05:46 ekuric joined #gluster
06:03 rejy joined #gluster
06:03 hagarth joined #gluster
07:03 hagarth joined #gluster
07:28 Paul-C joined #gluster
07:28 ramteid joined #gluster
07:51 haomaiwang joined #gluster
08:14 hagarth joined #gluster
09:16 ninkotech_ joined #gluster
09:17 ninkotech joined #gluster
09:17 ninkotech__ joined #gluster
09:38 ctria joined #gluster
10:31 bharata-rao joined #gluster
11:01 hagarth joined #gluster
11:02 davinder15 joined #gluster
11:13 mjrosenb ok, I think I've asked this before, but what are all of the hardlinks in .gluster for?
11:14 bharata-rao joined #gluster
11:16 LebedevRI joined #gluster
11:41 bene2 joined #gluster
11:52 rwheeler joined #gluster
11:56 hagarth joined #gluster
12:13 klaas joined #gluster
12:13 gmcwhistler joined #gluster
12:42 bharata-rao joined #gluster
12:53 hchiramm_ joined #gluster
13:12 simulx joined #gluster
13:27 gmcwhistler joined #gluster
13:51 diegows joined #gluster
13:52 diegows joined #gluster
13:53 diegows joined #gluster
14:17 jcsp left #gluster
14:37 gmcwhistler joined #gluster
14:44 theron joined #gluster
15:09 Ark joined #gluster
15:31 firemanxbr joined #gluster
15:33 firemanxbr joined #gluster
15:35 n0de joined #gluster
15:36 gmcwhistler joined #gluster
15:41 theron joined #gluster
15:48 diegows joined #gluster
16:24 Intensity joined #gluster
16:39 theron joined #gluster
16:51 daMaestro joined #gluster
16:51 hchiramm_ joined #gluster
17:20 theron joined #gluster
17:44 kanagaraj joined #gluster
18:20 pureflex joined #gluster
18:50 daMaestro joined #gluster
19:23 tjikkun_ joined #gluster
19:28 ghenry joined #gluster
19:36 coredump joined #gluster
19:55 pureflex joined #gluster
20:49 [o__o] joined #gluster
21:12 pureflex joined #gluster
21:15 firemanxbr joined #gluster
21:21 ira joined #gluster
22:29 haomaiw__ joined #gluster
22:43 firemanxbr joined #gluster
23:00 pureflex joined #gluster
23:21 firemanxbr joined #gluster
23:26 calum_ joined #gluster
23:35 sonicrose joined #gluster
23:35 sonicrose warning... gluster 3.5.1 corrupted all my VMs :(
23:36 sonicrose i'm restoring from backups back onto 3.4.4
23:36 sonicrose for some reason, for any VM that has a VHD file > 32GB, as soon as i write to it gluster shows the file becoming 2.1TB in size and i can't write
23:37 sonicrose all was good until i made snapshots of my VMs, now they all have these 2.1TB VHD files and they don't boot
23:37 sonicrose the ones that had 8GB drives work ok, and i tested: if i create a VHD file of say 64GB, as soon as i write it becomes 2.1TB and I get a Stale File Handle error
23:40 sonicrose doesn't occur on 3.4.4.2
23:40 sonicrose anyone else seen this yet?
