
IRC log for #gluster, 2013-11-01


All times shown according to UTC.

Time Nick Message
00:37 atrius` joined #gluster
00:48 kPb_in joined #gluster
01:29 asias joined #gluster
01:54 Guest54292 joined #gluster
02:12 Skaag joined #gluster
02:32 asias joined #gluster
02:34 kPb_in joined #gluster
02:47 kPb_in_ joined #gluster
02:47 P0w3r3d joined #gluster
03:47 asias_ joined #gluster
04:51 bulde joined #gluster
04:54 saurabh joined #gluster
04:55 psharma joined #gluster
05:02 bulde joined #gluster
05:48 bulde joined #gluster
05:51 ngoswami joined #gluster
05:55 bulde joined #gluster
06:04 nshaikh joined #gluster
06:27 DV__ joined #gluster
06:28 harish_ joined #gluster
06:30 ricky-ticky joined #gluster
06:39 DV__ joined #gluster
06:39 vimal joined #gluster
06:58 DV__ joined #gluster
07:06 nasso joined #gluster
07:06 Fresleven joined #gluster
07:47 ekuric joined #gluster
07:56 Fresleven_ joined #gluster
08:02 Fresleven_sysadm joined #gluster
08:06 bulde joined #gluster
08:09 ctria joined #gluster
08:09 harish_ joined #gluster
08:11 vpshastry joined #gluster
08:23 Fresleven_ joined #gluster
08:49 T0aD- joined #gluster
08:51 ThatGraemeGuy_ joined #gluster
08:52 foster_ joined #gluster
08:52 bulde joined #gluster
08:52 vpagan_ joined #gluster
08:53 ke4qqq_ joined #gluster
08:53 efries joined #gluster
08:53 hagarth1 joined #gluster
08:54 rubbs_ joined #gluster
08:54 Kodiak joined #gluster
08:57 Nuxr0 joined #gluster
08:58 23LAAFKM7 joined #gluster
08:59 eightyeight joined #gluster
09:00 sticky_afk joined #gluster
09:00 stickyboy joined #gluster
09:01 kevein joined #gluster
09:03 klaxa joined #gluster
09:05 PatNarciso joined #gluster
09:30 vpshastry joined #gluster
09:46 raar joined #gluster
09:53 pkoro joined #gluster
10:22 bulde joined #gluster
10:23 ababu joined #gluster
11:03 vpshastry left #gluster
11:09 rwheeler joined #gluster
11:26 P0w3r3d joined #gluster
11:31 T0aD joined #gluster
11:48 bulde joined #gluster
11:58 Kodiak left #gluster
12:00 KodiakF joined #gluster
12:01 marbu joined #gluster
12:05 rcheleguini joined #gluster
12:07 mbukatov joined #gluster
12:26 JB__ joined #gluster
12:32 NuxRo joined #gluster
13:16 P0w3r3d joined #gluster
13:17 bet_ joined #gluster
13:17 davidbierce joined #gluster
13:24 lpabon joined #gluster
13:29 vpshastry joined #gluster
13:36 edward2 joined #gluster
13:37 vpshastry left #gluster
13:39 ira joined #gluster
13:47 marbu joined #gluster
13:54 kaptk2 joined #gluster
14:08 harish_ joined #gluster
14:09 partner semiosis: np. thanks for making the debian files available, i was able to build the versions i needed. that's when i noticed the log dir issue
14:24 bugs_ joined #gluster
14:47 plarsen joined #gluster
14:51 vpshastry joined #gluster
14:51 vpshastry left #gluster
15:00 asias_ joined #gluster
15:01 LoudNoises joined #gluster
15:13 ndk joined #gluster
15:29 verdurin joined #gluster
15:41 bulde joined #gluster
15:46 ThatGraemeGuy joined #gluster
15:47 failshell joined #gluster
15:56 hateya joined #gluster
16:07 vpshastry joined #gluster
16:10 vpshastry left #gluster
16:29 jkroon joined #gluster
16:33 jkroon hi guys, i've got a gluster volume (originally just had 1 brick) where the Type is Distribute. I'd now like to add a second brick to replicate the first, but "gluster volume add-brick replica 2 10.10.101.230:/mnt/gl_ast_spool" gives me "Wrong brick type: 2, use <HOSTNAME>:<export-dir-abs-path>". according to peer status the Hostname of the peer is in fact just the IP.
16:35 jkroon glusterfs version 3.4.0
16:36 verdurin_ joined #gluster
16:37 dbruhn jkroon, you can't change the volume type after the fact
16:37 dbruhn at least I don't think you can
16:38 nhm joined #gluster
16:38 bulde joined #gluster
16:38 jkroon oh crap, that's a problem.  Ok, alternatively, how can I create a volume with a single brick into distribute-replicate mode?
16:38 Mo__ joined #gluster
16:39 dbruhn You need two bricks for a replica volume
16:39 jkroon that way I can create a new volume, move all the data over, destroy the old one and add the brick that came free into the new one.
16:39 elyograg adding replicas should be possible.  at least I had understood that to be the case.
16:40 dbruhn wouldn't it have needed to be set up with any replica in the first place?
16:40 jkroon elyograg, that's what I was told before, and that was the migration plan for the client: add a new server, create a gluster volume, move the data onto it, trash the old partition and add it as a new brick to replicate
16:40 elyograg jkroon: you forgot to give it the name of the volume.
16:41 elyograg (I so want to say "you forgot to hook up the doll.")
16:41 jkroon grr, elyograg you're right
16:41 elyograg http://www.mail-archive.com/gluster-users@gluster.org/msg11802.html
16:41 glusterbot <http://goo.gl/Nb6czv> (at www.mail-archive.com)
16:41 jkroon jay, ok, i've clearly slept too little this week
16:41 dbruhn hmm, interesting
16:41 jkroon well, it's taking a while to complete the command but it's not moaning :)
16:42 jkroon and Type changed, and number of bricks is now 1 x 2 = 2 :)
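The failing command in the question was missing only the volume name; a minimal sketch of the corrected sequence, using a hypothetical volume name (the real one never appears in the log):

```
# add the second brick and raise the replica count from 1 to 2
# ("ast-spool" is a placeholder volume name)
gluster volume add-brick ast-spool replica 2 10.10.101.230:/mnt/gl_ast_spool

# verify: Type should now read Replicate, Number of Bricks: 1 x 2 = 2
gluster volume info ast-spool
```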
16:42 elyograg it has to copy all the data.
16:43 elyograg should be background, though.
16:43 jkroon it completed without copying the data
16:43 jkroon and no migration going yet (at least, df -h isn't showing data being added onto the brick)
16:43 jkroon i believe that's what rebalance is for, or I can just trigger a stat on all the files to trigger a self-heal
16:44 elyograg cool.  when I make changes to my volumes, it takes several seconds. Always just long enough that I'm worried it's going to fail.
16:45 jkroon ooh yes, running a du -sh /path/to/some/mountpoint/referencing/gluster triggered what was required.
16:45 jkroon this is going to take a LONG time.
16:47 elyograg if you do the du without parameters (or maybe just the -h) you'll see its progress.
16:47 rwheeler joined #gluster
16:47 elyograg won't make it go any faster, but you'll have some warm fuzzies. :)
16:48 jkroon hahaha, i think that's a bad idea considering that it's around 250G worth of mostly <1MB files.
16:48 jkroon very often <100KB
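Two common ways to kick off that copy onto the new brick, sketched with placeholder volume and mount names; on 3.3/3.4 the heal command does the crawl server-side, while the stat crawl is roughly what the du run above is doing:

```
# ask the self-heal daemon to walk the entire volume (volume name is a placeholder)
gluster volume heal ast-spool full

# alternative: stat every file through the FUSE mount to trigger per-file heals
find /mnt/ast-spool -noleaf -print0 | xargs -0 stat > /dev/null
```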
16:52 elyograg I added some storage to my volume and started a rebalance on Tuesday at 9:30 PM or so.  It's now almost 11 AM on Friday, and it's only migrated 1.2TB of the approximately 10TB that it must migrate.
16:53 Alex elyograg: I'm glad you've hit 1.2TB from 9000MB, at least! ;)
16:53 elyograg It was 1000GB yesterday. :)
16:55 Alex ah, sorry, yes.
16:57 jkroon elyograg, yep, this seems to be a really slow process.
16:57 jkroon done 400MB so far ...
16:57 jkroon but now I'm headed home, finally
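Progress for both cases can be polled from the CLI instead of watching df or du; a sketch with a placeholder volume name:

```
# per-node file/byte counts for a rebalance started after adding bricks
gluster volume rebalance myvol status

# entries still pending self-heal, listed per brick
gluster volume heal myvol info
```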
17:08 ira joined #gluster
17:08 ira joined #gluster
17:22 ngoswami joined #gluster
17:25 zaitcev joined #gluster
17:32 JB__ joined #gluster
17:32 JB__ left #gluster
17:44 rotbeard joined #gluster
17:56 TP_ joined #gluster
18:00 TP_ Hey Gang! Does anyone know the best way to determine if a volume is distributed replicated versus just replicated?
18:02 lpabon joined #gluster
18:05 Cenbe joined #gluster
18:14 elyograg TP_: gluster volume info [VOLUMENAME]
18:20 KodiakF joined #gluster
18:22 TP_ Thanks  elyograg
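The field that answers TP_'s question is the brick count in that output; an annotated sketch (volume name, bricks, and counts are illustrative):

```
gluster volume info myvol
# pure replica: every brick holds a full copy of the data
#   Type: Replicate
#   Number of Bricks: 1 x 2 = 2
#
# distributed-replicate: files are hashed across replica sets
#   Type: Distributed-Replicate
#   Number of Bricks: 2 x 2 = 4
```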
18:28 KodiakF joined #gluster
18:28 rwheeler joined #gluster
18:29 TheSov joined #gluster
18:30 TheSov Hello! i wanted to set up an HA NFS solution and i was wondering if gluster would be the software to use to keep everything in sync?
18:33 dbruhn TheSov, you are probably going to have to provide more info about what exactly you are trying to accomplish. Otherwise the answer is just going to be "sure"
18:36 TheSov well i have a vmware cluster, and most of the guests use iscsi to access their disks on a commercial san, but the vmdk bootable volumes for the vmware guests live on an NFS server. I need to make this server highly available in case of a failure. I was wondering if i should use gluster or drbd for something like this
18:38 dbruhn So are you expecting automatic failover?
18:40 TheSov not at this time, but i would like to do that eventually
18:41 spechal_ joined #gluster
18:41 TheSov perhaps a cluster ip
18:41 dbruhn gluster has a built-in NFS server, and if you create a replicated volume it will do what you expect it to do
18:42 spechal_ I followed the outline of http://www.gluster.org/community/documentation/index.php/QuickStart and I see the volumes listed when I issue gluster volume info on both boxes, but when I mount server1 and add data, it never shows up on server2 ... anyone have any ideas on how to troubleshoot it?
18:42 glusterbot <http://goo.gl/OEzZn> (at www.gluster.org)
18:42 dbruhn how are you mounting server1?
18:45 TheSov so gluster has an nfs service, does it also have a cluster ip solution or do i have to go with something else?
18:45 dbruhn you need your own clustered IP service
18:46 TheSov ahhh thank you so very much!
18:46 dbruhn the gluster FUSE client is typically better for shared use
18:46 dbruhn but if you only have NFS as an option
18:46 dbruhn you want to make sure to use the gluster NFS service
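A rough outline of that combination, with hypothetical hostnames, brick paths, and a documentation-range floating IP; gluster provides the replicated storage and the NFSv3 server, while something like keepalived, CTDB, or pacemaker has to move the VIP:

```
# replica 2 volume across the two NFS heads (names and paths are hypothetical)
gluster volume create vmstore replica 2 nfs1:/bricks/vmstore nfs2:/bricks/vmstore
gluster volume start vmstore

# clients (e.g. the vmware hosts) mount the floating IP over NFSv3;
# whichever node currently holds the VIP answers with gluster's built-in NFS server
mount -t nfs -o vers=3 192.0.2.10:/vmstore /mnt/vmstore
```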
18:49 semiosis spechal_: please ,,(pasteinfo) and include client log file
18:49 glusterbot spechal_: Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
18:49 spechal_ http://fpaste.org/51092/33317741/
18:49 glusterbot Title: #51092 Fedora Project Pastebin (at fpaste.org)
18:50 spechal_ There is no log file from what I can tell, at least not in /etc/glusterfs like the doc says
18:50 KodiakF joined #gluster
18:50 dbruhn logs are at /var/log/glusterd
18:50 dbruhn or glusterfs
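For a native-client mount, the client-side log lands under /var/log/glusterfs and is named after the mount point; a sketch with hypothetical volume and mount point names:

```
# mount with the FUSE client (volume name and mount point are hypothetical)
mount -t glusterfs server1:/gv0 /mnt/gv0

# the client log name is the mount path with slashes replaced by dashes
less /var/log/glusterfs/mnt-gv0.log
```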
19:00 DV joined #gluster
19:08 KodiakF newb hypothetical question here - if I have this in fstab:  "gluster01.domain:/some-vol   /mnt/gluster   glusterfs   defaults,_netdev   0 0" , and gluster01.domain goes down but gluster02 & gluster03 are up, does the host hang on boot since it can't mount from gluster01.domain?
19:08 KodiakF or does it know to look for other gluster hosts?
19:09 dbruhn it only gets the other servers after it connects, so it would hang, if I remember right
19:09 jmeeuwen i've done some tests with a round-robin balanced DNS name... it seems the client might get stuck for a little while unless you tweak it
19:09 dbruhn you can always take a host down and point it at it to test it
19:10 jmeeuwen iirc, the timeout was about 10-15 seconds
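One common mitigation is the mount helper's backup server option, so the boot-time mount can fall back if gluster01 is unreachable; a sketch of the fstab line from the question with that option added (supported by the mount.glusterfs script shipped with 3.3/3.4):

```
gluster01.domain:/some-vol  /mnt/gluster  glusterfs  defaults,_netdev,backupvolfile-server=gluster02.domain  0 0
```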
19:15 spechal_ quick question ... if I mount server1 using the glusterfs client and server1 goes down, how does the machine with the mount to server1 get the data to server2?  I am looking for failover, and from what I have read gluster has that built in.  I am just trying to understand it
19:16 dbruhn the gluster fuse client actually connects to both servers and writes to both servers at the same time
19:16 dbruhn the client becomes aware of all of the servers in the cluster on initial connection to the first server
19:16 spechal_ so as long as server1 is online when the client connects, the mount is really to server1 and server2, so server1 can go away after that?
19:17 dbruhn yep
19:17 spechal_ awesome sauce ... no need for an F5 pool or HA proxy
19:17 dbruhn nope
19:21 dbruhn ... I hate split-brain errors... just in case anyone likes them
19:24 Xavier33 joined #gluster
19:31 JoeJulian hehe
19:37 dbruhn my application provider had an error that every time it processed a file that had an i/o error it would crash
19:38 dbruhn and I have been chasing thousands upon thousands of the damn things all week
19:40 JoeJulian dbruhn: How are you resolving them?
19:46 dbruhn Manually for the most part
19:47 dbruhn built some loop stuff for a delete of some data I am working on
19:54 JoeJulian dbruhn: I was just going to suggest, if you're able to just blindly declare one brick as sane then you could use the afr.favorite-child volume option to just pick which one you'll use in the event of split-brain.
19:58 KodiakF left #gluster
20:00 dbruhn the problem I am having is that the stuff is spread over several bricks, and I have some stuff that is showing up on like three or four bricks... which is even more confusing
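The manual resolution dbruhn describes usually boils down to picking a good copy per file and removing the bad one directly on its brick; a sketch with hypothetical volume, brick, and gfid values:

```
# list entries gluster currently flags as split-brain (volume name is a placeholder)
gluster volume heal myvol info split-brain

# on the brick holding the copy you have decided is bad:
getfattr -n trusted.gfid -e hex /bricks/b1/path/to/file   # note the file's gfid
rm /bricks/b1/path/to/file
rm /bricks/b1/.glusterfs/ab/cd/abcd....                   # hard link named after that gfid (placeholder)

# a subsequent heal repopulates the brick from the surviving copy
gluster volume heal myvol full
```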
20:14 SpeeR joined #gluster
20:16 wushudoin joined #gluster
20:40 _Bryan_ joined #gluster
20:41 RedShift joined #gluster
20:46 delhage joined #gluster
20:46 delhage joined #gluster
21:33 kPb_in_ joined #gluster
21:37 failshel_ joined #gluster
21:57 SpeeR has anyone had issues with xfs kernel panics under moderate load?
22:05 Rav__ joined #gluster
22:07 Rav__ how do I logrotate /var/log/gluster.log ?
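A minimal logrotate stanza for the path in the question, using copytruncate so nothing has to be restarted; note that on typical installs gluster writes its logs under /var/log/glusterfs/, and a wildcard there can be rotated the same way:

```
# /etc/logrotate.d/gluster (sketch; path taken from the question)
/var/log/gluster.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    copytruncate
}
```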
22:15 Cenbe joined #gluster
