
IRC log for #gluster, 2015-04-10


All times shown according to UTC.

Time Nick Message
00:26 plarsen joined #gluster
00:46 DV joined #gluster
01:00 tg2 you iscsi then mdadm on top of that to make a big array? ;|
01:31 balacafalata-bil joined #gluster
01:32 balacafalata joined #gluster
02:05 lnr joined #gluster
02:05 lnr left #gluster
02:20 harish joined #gluster
02:20 Bhaskarakiran joined #gluster
02:26 nangthang joined #gluster
02:33 hchiramm_ joined #gluster
02:52 nangthang joined #gluster
02:56 bharata-rao joined #gluster
03:04 firemanxbr joined #gluster
03:05 firemanxbr semiosis, hey guy
03:05 firemanxbr semiosis, I restored my backup, but my gluster won't come up
03:06 firemanxbr semiosis, http://ur1.ca/k554a
03:06 firemanxbr I made the backup using the 'dd' command
03:06 firemanxbr and mounted and restored the backup using tar --selinux --xattr --acl
03:07 firemanxbr but it doesn't come back up
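For a gluster brick, a restore like this only works if the trusted.* extended attributes survive; without them glusterd will typically refuse to bring the brick up. A rough sketch of an xattr-preserving backup/restore, assuming RHEL-style GNU tar (where the long options are spelled --xattrs and --acls, and --selinux is a Red Hat extension); the paths are placeholders:

    tar --create --acls --selinux --xattrs --xattrs-include='trusted.*' \
        -f /backup/brick1.tar -C /export/brick1 .
    tar --extract --acls --selinux --xattrs --xattrs-include='trusted.*' \
        -f /backup/brick1.tar -C /export/brick1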
03:24 Bhaskarakiran joined #gluster
03:39 poornimag joined #gluster
03:56 RameshN joined #gluster
03:58 kumar joined #gluster
03:59 Bhaskarakiran_ joined #gluster
04:00 Bhaskarakiran joined #gluster
04:00 ira joined #gluster
04:06 T3 joined #gluster
04:13 kdhananjay joined #gluster
04:24 nbalacha joined #gluster
04:25 nishanth joined #gluster
04:28 atinmu joined #gluster
04:30 RameshN joined #gluster
04:31 thangnn_ joined #gluster
04:33 spandit joined #gluster
04:34 schandra joined #gluster
04:36 lalatenduM joined #gluster
04:38 pppp joined #gluster
04:39 T3 joined #gluster
04:45 ndarshan joined #gluster
04:47 Anjana joined #gluster
04:49 dusmant joined #gluster
04:50 soumya joined #gluster
04:50 thangnn_ joined #gluster
04:55 meghanam joined #gluster
04:59 kanagaraj joined #gluster
05:06 nshaikh joined #gluster
05:10 kshlm joined #gluster
05:13 rafi joined #gluster
05:21 vimal joined #gluster
05:21 ppai joined #gluster
05:22 glusterbot News from newglusterbugs: [Bug 1210557] gluster peer probe with selinux enables throws error <https://bugzilla.redhat.com/show_bug.cgi?id=1210557>
05:22 atrius joined #gluster
05:28 atalur joined #gluster
05:30 lyang0 joined #gluster
05:39 jiffin joined #gluster
05:43 anrao joined #gluster
05:43 nbalacha joined #gluster
05:43 hagarth joined #gluster
05:44 overclk joined #gluster
05:48 Anjana joined #gluster
05:50 smohan joined #gluster
05:52 glusterbot News from newglusterbugs: [Bug 1210568] [GlusterFS 3.6.2 ] Brick goes down if there is incorrect SSL certificates are installed on the server nodes <https://bugzilla.redhat.com/show_bug.cgi?id=1210568>
05:54 jaank joined #gluster
05:54 kaushal_ joined #gluster
05:56 ppai joined #gluster
05:59 karnan joined #gluster
06:00 Bhaskarakiran joined #gluster
06:01 anrao joined #gluster
06:06 Anjana joined #gluster
06:09 nangthang joined #gluster
06:14 gem joined #gluster
06:21 kdhananjay joined #gluster
06:22 glusterbot News from newglusterbugs: [Bug 1206461] sparse file self heal fail under xfs version 2 with speculative preallocation feature on <https://bugzilla.redhat.com/show_bug.cgi?id=1206461>
06:24 gem joined #gluster
06:26 harish joined #gluster
06:27 jtux joined #gluster
06:27 kshlm joined #gluster
06:36 Guest71069 joined #gluster
06:37 aravindavk joined #gluster
06:37 anil joined #gluster
06:39 gem_ joined #gluster
06:46 micu joined #gluster
06:47 micu joined #gluster
06:49 jtux joined #gluster
06:53 poornimag joined #gluster
07:02 [Enrico] joined #gluster
07:04 raghu joined #gluster
07:11 lyang0 joined #gluster
07:14 papamoose joined #gluster
07:15 RameshN joined #gluster
07:16 ghenry joined #gluster
07:16 ghenry joined #gluster
07:18 semoule joined #gluster
07:19 smohan joined #gluster
07:29 nshaikh joined #gluster
07:30 soumya joined #gluster
07:33 pppp joined #gluster
07:35 ctria joined #gluster
07:38 nangthang joined #gluster
07:41 poornimag joined #gluster
07:43 fsimonce joined #gluster
07:47 Pupeno joined #gluster
07:51 andreask joined #gluster
07:52 gem joined #gluster
07:57 liquidat joined #gluster
07:59 ktosiek joined #gluster
08:02 Slashman joined #gluster
08:08 gem joined #gluster
08:11 hagarth joined #gluster
08:14 purpleidea fubada: reviewed :) good idea but needs a few fixes, thanks
08:20 Norky joined #gluster
08:26 thangnn_ joined #gluster
08:37 atalur joined #gluster
08:38 thangnn_ joined #gluster
08:46 smohan joined #gluster
08:51 T0aD joined #gluster
08:53 glusterbot News from newglusterbugs: [Bug 1210627] glusterd ping-timeout value available in glusterd statedump is incorrect <https://bugzilla.redhat.com/show_bug.cgi?id=1210627>
08:53 glusterbot News from newglusterbugs: [Bug 1210629] [GlusterFS 3.6.2 ] Gluster volume status shows junk characters even if  volume exists <https://bugzilla.redhat.com/show_bug.cgi?id=1210629>
08:53 glusterbot News from resolvedglusterbugs: [Bug 1138897] NetBSD port <https://bugzilla.redhat.com/show_bug.cgi?id=1138897>
09:02 cornusammonis joined #gluster
09:14 rolfb joined #gluster
09:22 poornimag joined #gluster
09:29 ppai joined #gluster
09:34 gem joined #gluster
09:40 gem_ joined #gluster
09:43 harish_ joined #gluster
09:51 smohan_ joined #gluster
09:54 anil joined #gluster
10:03 hagarth joined #gluster
10:03 nshaikh left #gluster
10:10 gem_ joined #gluster
10:11 nshaikh joined #gluster
10:19 dusmant joined #gluster
10:22 ndarshan joined #gluster
10:23 pierre31 joined #gluster
10:24 pierre31 hi
10:24 glusterbot pierre31: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:24 pierre31 I am planning to install glusterfs on 2 already rsync'ed data servers
10:25 pierre31 Should I do something specific to keep the data, and avoid resynchronizing it after the glusterfs install?
10:26 badone_ joined #gluster
10:30 pierre31 So to summarize: I have server1 and server2, each having 70TB of data, and I would like to install glusterfs. As the data is already replicated, is there a way to install glusterfs on these data servers with the existing data on both?
10:41 ppai joined #gluster
10:46 ndarshan joined #gluster
10:50 dusmant joined #gluster
10:52 T3 joined #gluster
10:56 LebedevRI joined #gluster
10:58 sbonazzo joined #gluster
10:59 sbonazzo Hi, just a quick heads up on http://download.gluster.org/pub/gluster//glusterfs/nightly/glusterfs/ missing a fedora 22 repo. It has a rawhide repo, but rawhide is now on fedora 23
11:01 firemanxbr joined #gluster
11:10 gem_ joined #gluster
11:13 nshaikh joined #gluster
11:17 anil joined #gluster
11:20 ppai_ joined #gluster
11:22 Prilly joined #gluster
11:24 glusterbot News from newglusterbugs: [Bug 1210684] BitRot :- scrub pause/resume should give proper error message if scrubber is already paused/resumed and Admin tries to perform same operation <https://bugzilla.redhat.com/show_bug.cgi?id=1210684>
11:24 glusterbot News from newglusterbugs: [Bug 1210686] DHT :- core was generated while running few test on volume <https://bugzilla.redhat.com/show_bug.cgi?id=1210686>
11:24 glusterbot News from newglusterbugs: [Bug 1210687] BitRot :- If scrubber finds bad file then it should log as a 'Error' in log not 'Warning' <https://bugzilla.redhat.com/show_bug.cgi?id=1210687>
11:24 glusterbot News from newglusterbugs: [Bug 1210689] BitRot :- Files marked as 'Bad' should not be accessible from mount <https://bugzilla.redhat.com/show_bug.cgi?id=1210689>
11:24 gildub joined #gluster
11:26 gem__ joined #gluster
11:35 Anjana1 joined #gluster
11:42 Anjana joined #gluster
11:54 glusterbot News from newglusterbugs: [Bug 1210690] BitRot :- changing log level to DEBUG doesn't have any impact on bitrot log files (scrub/bitd logs) <https://bugzilla.redhat.com/show_bug.cgi?id=1210690>
11:54 glusterbot News from newglusterbugs: [Bug 1210696] BitRot :- bitrot logs don't have msg-id in logs <https://bugzilla.redhat.com/show_bug.cgi?id=1210696>
11:54 T3 joined #gluster
12:06 RameshN joined #gluster
12:24 glusterbot News from newglusterbugs: [Bug 1210712] nfs-ganesha: ganesha-ha.sh teardown leaves the /var/lib/nfs symlink as it is. <https://bugzilla.redhat.com/show_bug.cgi?id=1210712>
12:26 gildub joined #gluster
12:35 nangthang joined #gluster
12:37 shaunm joined #gluster
12:43 theron joined #gluster
12:47 oxae joined #gluster
12:55 T3 joined #gluster
12:56 twisted` hey I still didn't resolve my problem with lockfile creation :(
12:56 twisted` I get: lockfile creation failed: Value too large for defined data type
12:56 wkf joined #gluster
12:57 sbonazzo left #gluster
12:57 firemanxbr hey guys
12:57 firemanxbr I have a new problem with my gluster cluster
12:58 firemanxbr my files are very strange, for example:
12:58 firemanxbr http://ur1.ca/k59mm
12:58 firemanxbr these files change all the time
12:59 firemanxbr any idea about this problem?
13:00 firemanxbr more information about my volume: http://ur1.ca/k59n0
13:00 firemanxbr can anyone help me?
13:01 B21956 joined #gluster
13:01 kanagaraj joined #gluster
13:11 bennyturns joined #gluster
13:22 hchiramm joined #gluster
13:29 asmarre joined #gluster
13:30 asmarre Hi, today I had a brick disconnected with ping timer expired from several clients (gluster 3.6.2)
13:31 coredump joined #gluster
13:31 dgandhi joined #gluster
13:31 asmarre it wasn't a network issue because I found the same error in the glustershd.log on the same machine
13:33 asmarre etc-glusterfs-glusterd.vol.log has this entries:
13:33 asmarre [2015-04-10 07:13:45.073010] W [rpcsvc.c:254:rpcsvc_program_actor] 0-rpc-service: RPC program not available (req 1298437 330)
13:33 asmarre [2015-04-10 07:13:45.081227] E [rpcsvc.c:544:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed to complete successfully
13:33 asmarre I've solved this by restarting all gluster* processes
13:34 asmarre any hint?
13:36 theron joined #gluster
13:37 coredump joined #gluster
13:42 lifeofguenter joined #gluster
13:44 coredump joined #gluster
13:51 ekuric joined #gluster
13:56 T3 joined #gluster
13:57 mike25de joined #gluster
13:58 mike25de hi all... i am reading a bit about gluster
13:58 mike25de my question is ... if i can add an existing folder from an ext4 partition to a gluster pool
13:58 mike25de i can not add new hdds to the server and i can't change a lot of things
14:07 hagarth joined #gluster
14:13 hamiller joined #gluster
14:18 ninthBIT joined #gluster
14:20 ninthBIT with AWS announcing a new service called Elastic File System http://aws.amazon.com/efs/ any guesses as to the tech they are using? speculating here because I can't find public information and we use glusterfs........ :)
14:21 wushudoin joined #gluster
14:21 hagarth ninthBIT: I wish they are using glusterfs too :)
14:24 glusterbot News from newglusterbugs: [Bug 1210775] [RFE] GFAPI should have a mechanism to receive and return xdata from/to application <https://bugzilla.redhat.com/show_bug.cgi?id=1210775>
14:24 ninthBIT i am hoping to find out what the state of the files is between availability zones. AWS EFS can span availability zones, but if they use a replication method then file version lag issues will appear. A gluster solution spanning availability zones helps us minimize this issue. possibly an anti-pattern, but it beats those replication system solutions
14:26 hagarth ninthBIT: yes, details are few and far between. not sure if EFS works on instance storage / ebs etc. Hence even durability guarantees are not clear.
14:28 mike25de does anyone have an idea about my stupid question?
14:28 mike25de do i need an extra partition for gluster? or can i add bricks on the existing ext4 partition?
14:28 mike25de thanks in advance
14:29 hagarth mike25de: you can certainly add a folder from an existing partition as a brick
14:30 mike25de hagarth: thanks man! that's what i wanted to hear.... so i can start using gluster :)))
14:48 coredump joined #gluster
14:57 T3 joined #gluster
14:57 coredump joined #gluster
14:58 semiosis mike25de: you really should test it before going to production.  add-brick requires rebalance, which is expensive.
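The add-brick/rebalance cycle semiosis mentions is driven from the CLI; a minimal sketch, assuming a distributed volume named vol0 (the hostname and brick path are placeholders):

    gluster volume add-brick vol0 server3:/export/brick1
    gluster volume rebalance vol0 start
    gluster volume rebalance vol0 status    # poll until it reports completed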
15:01 rafi joined #gluster
15:01 meghanam joined #gluster
15:03 nangthang joined #gluster
15:03 coredump|br joined #gluster
15:05 rotbeard joined #gluster
15:06 coredump joined #gluster
15:12 neofob joined #gluster
15:14 mike25de semiosis:   what do you mean?
15:15 mike25de semiosis: i have 2 servers that are ext4 ... and in production. I have to sync some folder data between them... i used rsync before... but i want something "better" if possible
15:16 mike25de that's why i asked if i can just create a brick on an existing filesystem
15:16 T3 joined #gluster
15:17 hagarth mike25de: do you have both folders already populated with identical data?
15:17 mike25de hagarth:  yeah
15:17 semiosis try what you're thinking about on a couple of virtual machines and make sure it works like you want it to, before doing it on your production servers
15:17 mike25de semiosis: sure.. i will test with some vms indeed
15:17 hagarth mike25de: you wouldn't have any of the gluster metadata set on the existing data
15:18 mike25de but is that correct... it should work, right?   or is gluster meant to work only with xfs?
15:18 semiosis ext4 is ok, but you might run into other issues
15:18 mike25de hagarth:  ok... now i am confused.. i need to read more about gluster
15:18 hagarth you are probably better off creating a single brick volume .. ensure that all the gluster metadata (extended attributes) are set and then add one more brick to form a replicated set.
15:18 semiosis mike25de: read, but also try :)
15:18 mike25de other issues like? ... semiosis
15:19 mike25de hagarth: thanks for the tip.
15:19 semiosis like what hagarth said
15:19 mike25de :) thanks guys for your input
15:25 semiosis yw
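hagarth's suggestion maps to roughly these commands; the hostnames, paths and volume name are placeholders, and the add-brick with "replica 2" is what converts the single-brick volume into a replicated pair:

    gluster volume create vol0 server1:/data/brick1
    gluster volume start vol0
    gluster peer probe server2
    gluster volume add-brick vol0 replica 2 server2:/data/brick1
    gluster volume heal vol0 full    # let self-heal copy the data to the new brick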
15:25 jobewan joined #gluster
15:26 lpabon joined #gluster
15:30 semoule joined #gluster
15:33 semoule joined #gluster
15:34 bennyturns joined #gluster
15:38 semoule joined #gluster
15:42 Prilly joined #gluster
15:47 semoule joined #gluster
15:48 nbalacha joined #gluster
15:50 Anjana left #gluster
15:50 wkf joined #gluster
16:04 hagarth joined #gluster
16:07 Asako joined #gluster
16:12 hellomichibye joined #gluster
16:13 hellomichibye hi. I initiated a volume heal. I can watch the status of files being transferred with gluster volume heal volume0 info
16:14 hellomichibye but how can I know if the heal has finished?
16:14 lexi2 joined #gluster
16:15 hellomichibye the reason I am asking is: I created a bash script that does all the magic. I also have a cronjob for backup, but I should not activate the backup job before the heal is finished. otherwise I will delete a lot of data from the backup
16:16 coredump joined #gluster
16:20 mike25de hellomichibye: and the script is still running?
16:21 hellomichibye the script does this:
16:21 hellomichibye gluster peer probe
16:21 hellomichibye gluster volume sync
16:22 hellomichibye setfattr -n trusted.glusterfs.volume-id -v ...
16:22 hellomichibye gluster volume heal volume0 full
16:22 hellomichibye and now it should wait until heal is complete
16:22 hellomichibye to add a line to crontab
16:22 mike25de but while the healing is running.. the script is also pending ... it's not exiting, right?
16:23 mike25de the script ends when the healing is done, correct?
16:23 hellomichibye I can also add the check to the script that is triggered by cron
16:23 hellomichibye and this script checks if the heal is complete or otherwise aborts
16:23 mike25de you might do a pid check or a lock file.
16:23 mike25de when the script is running .. it creates a lock file
16:23 hellomichibye you mean the heal script ?
16:24 mike25de the next cron run.. it checks if the lock file exists... and exits if that lock exists
16:24 mike25de yes the heal script.. i suppose is a bash script
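mike25de's lock-file idea is the standard cron guard; one possible sketch using flock(1), where the lock path is a placeholder:

    #!/bin/bash
    # backup cron job: bail out while another holder (e.g. the heal script) owns the lock
    exec 9>/var/lock/gluster-backup.lock
    flock -n 9 || exit 0
    # ... run the backup here ...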
16:24 hellomichibye so maybe you got me wrong
16:24 hellomichibye i "start" gluster heal with gluster volume heal volume0 full
16:24 mike25de yeah... but you start that from a bash script?
16:25 hellomichibye is there a way to ask gluster if the heal is done
16:25 hellomichibye yes
16:25 hellomichibye but gluster volume heal volume0 full returns within a few seconds. the heal is going on in the background
16:25 mike25de i am not sure about gluster - i am new to this channel... BUT your bash script i suppose is waiting.. and pending while the healing is done
16:25 mike25de ah ok - i didn't know that
16:26 mike25de good to know :)
16:26 mike25de then... we need to find out if the healing process ... leaves some logs.. or if we can query the status somehow.
16:26 mike25de sorry for not understanding the issue
16:27 hellomichibye there is gluster volume heal volume0 info
16:27 mike25de i am learning about gluster myself
16:27 hellomichibye it streams the files that are healed
16:27 hellomichibye to stdout
16:28 hellomichibye but between when the heal is started and when the stream of files is printed to stdout, it just returns within a few seconds
16:28 hellomichibye so I can’t use it to test if the heal is done
16:28 hellomichibye but I will have a look at the log files. maybe there's something in there to grep :)
16:29 mike25de yeah that.. might work
16:29 mike25de have a look :)
16:29 hellomichibye there is a /var/log/glusterfs/glfsheal-volume0.log
16:30 hellomichibye I will wait until the heal is completed and check if something useful is in there
16:30 hellomichibye thx for your help!
16:30 mike25de hellomichibye: worst case scenario .. check the last timestamp of the file
16:30 mike25de of the log file
16:31 mike25de which is not the best option.. but still... might work on some instances
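In 3.x, "gluster volume heal volume0 info" prints a per-brick "Number of entries:" count, so one workable check is to poll until every brick reports zero. A rough sketch, assuming that output format:

    while gluster volume heal volume0 info | grep -qE 'Number of entries: [1-9]'; do
        sleep 60
    done
    # every brick reports zero pending entries; safe to enable the backup cron job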
16:33 T3 joined #gluster
16:43 NuxRo guys, I need to copy a lot of files from one distributed/replicated volume to a distributed volume - what would be the most efficient way of doing it? I know rsync generates a lot of stat calls - any alternatives?
16:51 ira joined #gluster
16:52 Rapture joined #gluster
16:53 victori joined #gluster
16:53 _shaps___ joined #gluster
16:55 T3 joined #gluster
16:57 virusuy joined #gluster
16:57 virusuy joined #gluster
16:59 alpha01 joined #gluster
17:00 atalur joined #gluster
17:05 RameshN joined #gluster
17:09 sage joined #gluster
17:33 hellomichibye joined #gluster
17:45 theron_ joined #gluster
17:59 oxae joined #gluster
18:00 ira joined #gluster
18:02 ira joined #gluster
18:03 lalatenduM joined #gluster
18:10 Ara4Sh joined #gluster
18:11 theron joined #gluster
18:14 soumya joined #gluster
18:18 Ara4Sh_ joined #gluster
18:29 harish_ joined #gluster
18:33 Ara4Sh joined #gluster
18:41 Ara4Sh joined #gluster
18:41 _Bryan_ joined #gluster
18:46 Ara4Sh joined #gluster
18:48 Ara4Sh joined #gluster
18:53 tg2 efs looks interesting
18:53 tg2 $0.30/GB-month
18:53 tg2 lel
18:54 hchiramm_ joined #gluster
18:54 tg2 mike25de, you can do something simple
18:54 tg2 say your data is in server01:/mydatafolder/*
18:55 tg2 and your other server (syncd) is in server02:/mydatafolder/*
18:55 tg2 create (on the same mount) a new folder
18:55 tg2 server01:/s01b01 + server02:/s02b01
18:56 tg2 add those to a new gluster pool, they will be size 0 bytes
18:56 tg2 add them as replicas, gluster probe etc
18:56 tg2 you now have empty gluster volume
18:56 tg2 mount the volume on 1 host
18:57 tg2 mv the files into the volume from the underlying fs
18:57 tg2 and get rid of the second copy on the second server
18:57 tg2 then as you move files into the mount, it will replicate them onto the other server and create the extra data that gluster uses; while it's copying into the mountpoint, filling up the disk, it is also removing the originals, so the space used is the same
18:58 tg2 if you have enough space on your underlying mounts to accommodate 2x your data, then keep the data on the second server and copy it into the mount vs moving
19:00 tg2 another approach is to create a single brick volume on the first server, do the MV, then check that your data is intact, once confirmed, create a replica on the second server and let gluster do the copying to that server (you would remove the second set of data on server02 prior)
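tg2's first walkthrough, condensed into commands; the brick paths and volume name are placeholders, and the final mv assumes the bricks live on the same filesystem as /mydatafolder so space is freed as fast as it is consumed:

    # on server01:
    gluster peer probe server02
    gluster volume create data01 replica 2 server01:/s01b01 server02:/s02b01
    gluster volume start data01
    mount -t glusterfs server01:/data01 /mnt/data01
    # on server02: drop the redundant copy first (gluster re-replicates it)
    rm -rf /mydatafolder/*
    # back on server01: move the data in through the mountpoint
    mv /mydatafolder/* /mnt/data01/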
19:41 harish_ joined #gluster
19:45 roost joined #gluster
19:56 glusterbot News from newglusterbugs: [Bug 1093692] Resource/Memory leak issues reported by Coverity. <https://bugzilla.redhat.com/show_bug.cgi?id=1093692>
20:26 DV joined #gluster
20:28 NN_Tony joined #gluster
20:32 harish_ joined #gluster
20:34 jackdpeterson joined #gluster
21:14 schism joined #gluster
21:15 schism semiosis: I guess I have an old link for your ppa, any chance you can point me to 3.2.7 precise debs?
21:26 tessier JoeJulian: gluster is working great, thanks for the help yesterday. Saw your blog post about performance optimizations. Totally understand about optimizing closer to the web app or whatever via caching, php tuning, varnish, etc. But it seems like using gluster to host VM images (my primary use case) might be a special case. Is there any tuning recommended for random reads/writes within one giant file?
21:28 JoeJulian Nope
21:29 JoeJulian That's a very general case and works best with the defaults (generally).
21:29 JoeJulian The best thing to do for VMs is to use the libgfapi interface instead of going through a fuse mount.
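For reference, a qemu built against libgfapi (qemu 1.3+ with GlusterFS 3.4+) can address the image with a gluster:// URL instead of a path on a fuse mount; a small sketch with placeholder server, volume and image names:

    qemu-img create -f qcow2 gluster://server1/vol0/images/vm1.qcow2 20G
    qemu-system-x86_64 -m 2048 -enable-kvm \
        -drive file=gluster://server1/vol0/images/vm1.qcow2,format=qcow2,if=virtio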
21:30 JoeJulian @ppa
21:30 glusterbot JoeJulian: The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
21:30 JoeJulian schism: ^
21:30 schism JoeJulian: 2.3
21:30 schism err,3.2
21:30 JoeJulian Nope. Not going to be able to find that.
21:31 schism welp, 'http://ppa.launchpad.net/semiosis/glusterfs-3.2/ubuntu'
21:31 schism that's what I have, and sadly, I don't have the debs
21:32 schism ii  glusterd                         3.2.7-1ubuntu~ppa1~precise1      clustered file-system (server package)
21:32 schism ii  glusterfs-client                 3.2.7-1ubuntu~ppa1~precise1      clustered file-system (client package)
21:32 schism ii  libglusterfs0                    3.2.7-1ubuntu~ppa1~precise1      GlusterFS libraries and translator modules
21:32 JoeJulian iirc semiosis said something about archives once. No clue where those would be on launchpad.
21:33 JoeJulian I would present this to management as a good reason to upgrade to a supported version.
21:33 schism and as much as I'd love to upgrade, that's not in the cards until i can make the new dev env that won't stand up match prod
21:35 JoeJulian Bummer. I've been there myself.
21:35 JoeJulian I went with the new dev env option.
21:35 JoeJulian Currently in beta.
21:36 schism this is the new dev env, and they want it up matching stuff so they can work on upgrades.
21:37 schism and a google search for glusterd-3.2.7-1ubuntu~ppa1~precise1 gives semiosis's old ppa that 404s now.
21:37 JoeJulian Yeah, I tried the same thing.
21:37 JoeJulian Same for the +archive version.
21:37 schism they're not in /var/cache/apt either
21:39 tessier JoeJulian: Well great, defaults means no tweaking which is less work for me. Thanks!
21:41 JoeJulian schism: You might be able to tweak these directions to build your own: http://irclog.perlgeek.de/gluster/2013-01-02#i_6290324
21:42 schism Trying to match semiosis's build from source is my absolute last ditch choice
21:42 JoeJulian obviously you won't be able to grab the correct version debian tarball, but if you grab a newer one you can probably figure out how to edit it to work.
21:43 schism oh! that's way better than just the sources! thanks!
21:43 roost joined #gluster
21:43 schism I just can't believe there's only 4 results in google...
21:45 schism I have a little time to see if he responds, I'll keep this session up in case
21:45 JoeJulian @learn old ppa as If you absolutely must use a no-longer supported version, you might be able to build your own deb by extrapolating from these instructions: http://irclog.perlgeek.de/gluster/2013-01-02#i_6290324 . Obviously you won't be able to grab the correct version debian tarball, but if you grab a newer one you can probably figure out how to edit it to work.
21:45 glusterbot JoeJulian: The operation succeeded.
21:45 JoeJulian (becoming a FAQ)
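The linked recipe boils down to rebuilding the upstream tarball with a debian/ packaging tree; a loose sketch, where the tarball URL and the version string are assumptions that may need adjusting:

    sudo apt-get install build-essential devscripts debhelper
    wget http://download.gluster.org/pub/gluster/glusterfs/3.2/3.2.7/glusterfs-3.2.7.tar.gz
    tar xf glusterfs-3.2.7.tar.gz && cd glusterfs-3.2.7
    # copy in a debian/ dir from a newer packaging, then set the target version
    # at the top of debian/changelog, e.g. 3.2.7-1ubuntu~ppa1~precise1
    dpkg-buildpackage -us -uc -b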
21:46 JoeJulian I'm amazed at how many people use really old stuff and don't keep their own repo for it.
21:46 semiosis gah
21:47 schism heh
21:47 semiosis grumble grumble woodwork grumble
21:47 * semiosis feels like the technical debt collector these days
21:48 schism if I'd set up the cookbook, I would have set up an internal repo so i could pick when things changed, in fact I've already set up an internal repo
21:48 JoeJulian semiosis: imho, you should just write a blog article on how to do it and leave the legacy stuff behind.
21:48 semiosis JoeJulian: that, or people should just upgrade
21:48 JoeJulian :D
21:49 schism i've been at this company 2.5 months. I'll get there.
21:49 JoeJulian He's working on it, just has to duplicate his prod so he can figure out how to use chef (shudder) to upgrade it all.
21:49 semiosis ouch
21:50 schism I actually floated the idea of upgrading the existing dev/staging/production as my first choice and the director said no.
21:50 JoeJulian Which means he'll be a really good ruby programmer (yeah, oxymoron, I know) and realize that chef's just getting in the way of writing a ruby script to do it all.
21:51 schism hey, be nice, I speak cfengine/puppet/chef, and working on salt now :)
21:51 JoeJulian That's when you tell the director that "we need to reevaluate your decision. The packages are no longer available to do it your way."
21:52 schism if only
21:52 JoeJulian That's what I would do.
21:52 JoeJulian But then, I've always been pretty outspoken.
21:53 schism yeah, not gonna get traction because of how behind the new dev env standup is.
21:53 semiosis schism: how comfortable are you with building the binary debs?  could I just give you the source package files?  or do you need me to build binaries?
21:53 schism i can probably build them.
21:53 semiosis heh
21:53 schism is there a magical way to pull an installed version back to deb that I've never heard of?
21:54 semiosis i dont understand your question
21:54 LebedevRI joined #gluster
21:55 schism we've got it installed, but some one/thing cleaned the package cache. so i can at least compare what I build.
21:56 semiosis doubtful
21:57 semiosis you're lucky that i just did this yesterday for someone who was politely bugging me for weeks :)
21:58 badone_ joined #gluster
21:58 semiosis building binary debs now.  if this goes well, should have a link for you in about 5 minutes
21:58 semiosis you really caught me just before i left for the weekend
21:59 semiosis kkeithley: remind me next week to send you my fancy deb building vagrant machine
22:00 semiosis ...which i may even put on github (minus the private keys) over the weekend
22:00 hchiramm_ joined #gluster
22:01 semiosis schism: pm?
22:03 schism thanks for this.
22:08 semiosis JoeJulian: prepare to LYAO... i just ran out of space... ON DOWNLOAD.GLUSTER.ORG
22:08 semiosis someone should rotate some logs or something :)
22:10 JoeJulian hehehe
22:11 JoeJulian It's probably brick logs. ;)
22:23 semiosis schism: https://www.dropbox.com/s/xhtjo8wtmmhb450/glusterfs-3.2.7-precise1.tar.gz?dl=0
22:23 semiosis aaaand i'm outta here
22:41 T3 joined #gluster
22:57 hchiramm_ joined #gluster
23:26 capri joined #gluster
23:33 Pupeno_ joined #gluster
23:35 hchiramm_ joined #gluster
23:44 T3 joined #gluster
23:50 schism thank you!!!!
