
IRC log for #gluster, 2015-02-14


All times shown according to UTC.

Time Nick Message
00:00 _polto_ joined #gluster
00:17 plarsen joined #gluster
00:24 kiwnix joined #gluster
00:41 wkf joined #gluster
00:43 mmance joined #gluster
00:56 theron joined #gluster
01:04 doo joined #gluster
01:05 kiwnix joined #gluster
01:06 wkf joined #gluster
01:11 ttkg joined #gluster
01:13 doo__ joined #gluster
01:33 mmance I seem to run into a lot of errors setting this thing up
01:33 mmance "is not in 'Peer in Cluster' state"
01:33 mmance I have run peer probe for each server from each server
01:34 mmance and it still says that
01:35 mmance sorry,
01:35 mmance it was just me not typing things correctly
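For anyone hitting the same "is not in 'Peer in Cluster' state" message, a minimal sketch of the usual probe-and-verify sequence (hostnames server1..server3 are placeholders):

    # run on server1; glusterd must be running and resolvable on every peer
    gluster peer probe server2
    gluster peer probe server3
    # every peer should report "State: Peer in Cluster (Connected)"
    gluster peer status
    # probe server1 back from another node so it is known by hostname, not just IP
    gluster peer probe server1      # run on server2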
01:45 harish joined #gluster
01:46 doo joined #gluster
01:46 diegows joined #gluster
02:03 T3 joined #gluster
02:31 nangthang joined #gluster
02:33 JustinClift joined #gluster
02:41 mmance wow, I must say glusterfs is pretty freaking awesome
02:42 mmance I am able to push over 220MB/s to a bunch of cheap 80GB SATA drives
02:48 plarsen joined #gluster
02:51 rcampbel3 joined #gluster
02:51 JoeJulian Nice
02:52 JoeJulian mmance: Can you tell anything more about what this is being used for?
02:54 mmance sure
02:54 mmance I am doing video capture with BlackMagic Intensity Pros
02:56 mmance Capturing 1080p 30fps streams at approx. 110MB/s each on two computers to the GlusterFS
02:56 JoeJulian Live gig recording, studio recording, hollywood?
02:56 mmance right now I have 5 servers with 2x 80GB 7200rpm drives in RAID1 with mdadm
02:57 mmance it's for YouTube
02:57 mmance http://youtu.be/72gS61xfJwY
02:57 mmance That's me, I made that last week.
02:57 mmance A lot of the equipment I am using is sorta crappy, but I used it because of where it came from.
02:57 mmance homeland security
02:57 mmance lol
02:58 JoeJulian hehe
02:58 mmance I wanted to be able to say that I started my studio on homeland security hardware
02:58 mmance I am an electronics recycler by trade.
02:58 JoeJulian That's kinda cool.
02:58 mmance I have literally milk crates full of 80GB drives
02:58 mmance so that's what made me turn to gluster
02:59 PinkFreud JoeJulian: so, I'm trying to see if I can save the data on this cluster.
02:59 mmance although I might save a ton of power if I switched to a few machines with SSDs, this is a whole lot more fun
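As a rough sketch of the setup mmance describes (two 80GB disks per server mirrored with mdadm and used as a gluster brick); device, path, and volume names are placeholders, and the replica count is only an example since the actual volume layout isn't stated:

    # mirror the two disks and put a filesystem on the array
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    mkfs.xfs /dev/md0
    mkdir -p /bricks/brick1
    mount /dev/md0 /bricks/brick1
    mkdir -p /bricks/brick1/data
    # combine the per-server bricks into one volume
    gluster volume create vidvol replica 2 server1:/bricks/brick1/data server2:/bricks/brick1/data
    gluster volume start vidvol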
02:59 jmarley joined #gluster
03:00 PinkFreud JoeJulian: I've brought the (slightly smaller) 1TB brick down.  You might remember that it's a replication partner for another 1TB node that has a few MB more space.
03:00 JoeJulian yes
03:01 PinkFreud now I want to migrate the 1TB of data from the remaining brick to the 4TB pair.
03:02 PinkFreud how can I accomplish that?  gluster complains about only dealing with one brick.
03:02 JoeJulian Is that vm live?
03:02 PinkFreud vm is up, but I've taken down glusterd and glusterfsd.
03:02 PinkFreud er, on the smaller brick.
03:03 JoeJulian There's plenty of room on the 4TB drives?
03:03 PinkFreud yes.
03:03 PinkFreud 4TB bricks have 1.8TB free.
03:04 PinkFreud the goal right now is just to move the data to the 4TB nodes, then I'll build a new striped cluster next week.
03:04 JoeJulian I would "gluster volume remove-brick $smaller_brick_1 $smaller_brick_2 start" and let it migrate them to the bigger disks.
03:04 PinkFreud my problem with that is that brick1 has slightly less data than brick2.
03:05 JoeJulian Should be ok.
03:05 PinkFreud ah, ok.
03:05 JoeJulian They're in replica, so the xattrs will show that one's not up to date.
03:05 PinkFreud so i need to bring brick1 back online?
03:05 PinkFreud or can i force that operation with brick1 down?
03:05 JoeJulian I would try without doing that first.
03:05 PinkFreud 'that'?
03:06 PinkFreud without bringing it back up?
03:06 JoeJulian right
03:06 PinkFreud ok.
03:06 PinkFreud volume remove-brick start: success
03:06 PinkFreud hah!
03:06 PinkFreud JoeJulian: thank you.  :)
03:07 JoeJulian Use "status" to see when it's done.
03:07 JoeJulian Once it's complete, "commit"
03:07 * PinkFreud nods
03:08 PinkFreud will the volume continue to work in the meantime?
03:08 JoeJulian That's what the documentation says.
03:09 JoeJulian If I'm wrong, I'll refund you everything you paid me.
03:09 PinkFreud ok.  I'll be looking forward to that $0.00 check in the mail, in that case.
03:09 PinkFreud :)
03:12 bala joined #gluster
03:13 PinkFreud JoeJulian: anyway, thank you again.  :)
03:15 ildefonso joined #gluster
03:26 PinkFreud hmm.  doesn't look like it migrated the data over.
03:27 PinkFreud i suppose i can copy it from the mounted lv on brick2 back into the cluster, though
03:27 PinkFreud oh, right.  commit.
03:28 PinkFreud yeah, still didn't copy the data over.
03:28 PinkFreud ok, manual copy it is.
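The sequence JoeJulian describes, spelled out; the volume name and brick paths below are placeholders, and the volume name is required in the real command:

    # start migrating data off the bricks that are being removed
    gluster volume remove-brick myvol server3:/bricks/1tb server4:/bricks/1tb start
    # poll until the status shows "completed" for the affected node(s)
    gluster volume remove-brick myvol server3:/bricks/1tb server4:/bricks/1tb status
    # only then detach the bricks from the volume definition
    gluster volume remove-brick myvol server3:/bricks/1tb server4:/bricks/1tb commit

As the rest of the thread shows, if the migration doesn't actually complete, commit still detaches the bricks, and any unmigrated data has to be copied back in by hand from the removed brick.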
03:56 mmance joined #gluster
03:56 johnbot Potentially stupid question. Did the fstab syntax change for gluster 3.6.2 from 3.5? I can easily mount my 3.6.2 gluster volume from any 3.6.2 gluster client with 'mount 172.31.47.154:gv1 /storage' etc., but if I do a 'mount -a' it errors with 'Mount failed. Please check the log file for more details.'. The related fstab line is '172.31.47.154:gv1 /storage glusterfs  defaults 1 2'
03:57 mmance ok, I have 5 clients using the gluster volume, but I keep getting one or two kicked with 'Transport endpoint is not connected'
03:58 mmance I have tried restarting the service, killing the service, forcing the umount, and using fusermount -u, to no avail
03:58 badone__ joined #gluster
03:58 mmance I am pretty sure I am saturating my whole pipe causing network interruptions.
03:59 mmance The only way I have been able to recover the client is a reboot so far
03:59 mmance that's such a Microsoft answer, I don't like it
04:02 mmance now 3 of the 5 clients have the error
04:03 mmance I am using it much differently now. I am reading 2 very large files and writing thousands of 3MB files from 5 clients
04:06 mmance I am mounted via fuse, should I switch to NFS?
04:13 dgandhi joined #gluster
04:18 mmance please disregard that question, I am already using the gluster native client
04:23 johnbot I just added the mount command to rc.local for now, which works fine; it may be some problem related to Ubuntu 14.04 combined with gluster 3.6.2
04:32 mator joined #gluster
04:36 hagarth joined #gluster
04:52 rcampbel3 joined #gluster
04:59 badone__ joined #gluster
05:04 shubhendu joined #gluster
05:07 coredump joined #gluster
05:16 Folken_ joined #gluster
05:17 badone__ joined #gluster
05:21 rcampbel3 joined #gluster
05:28 harish joined #gluster
05:44 dusmant joined #gluster
06:56 badone__ joined #gluster
06:57 hagarth joined #gluster
07:09 andreask joined #gluster
07:15 T0aD joined #gluster
07:17 Slasheri_ joined #gluster
07:17 tberchenbriter_ joined #gluster
07:18 sac`away` joined #gluster
07:23 churnd- joined #gluster
07:24 m0zes_ joined #gluster
07:29 tru_tru joined #gluster
07:32 mmance are many small files still an issue with gluster in terms of performance?
07:33 mmance It seems that when multiple clients ask for the same file at the same time I get those 'transport endpoint not connected' errors
07:33 mmance I ended up scripting my render so that each client only works on their own files.
07:33 tom[] joined #gluster
07:34 mmance and it worked beautifully, but I had a bit of overlap, and when they asked for the same files, one got 'permission denied' and the other got 'endpoint is not connected'
07:35 kbyrne joined #gluster
07:35 [o__o] joined #gluster
07:37 kbyrne joined #gluster
07:44 ekuric joined #gluster
07:46 mattmcc joined #gluster
07:49 madebymarkca joined #gluster
07:56 andreask joined #gluster
08:09 ghenry joined #gluster
08:20 johnnytran joined #gluster
08:32 _polto_ joined #gluster
09:05 kovshenin joined #gluster
09:25 rcampbel3 joined #gluster
09:28 hagarth joined #gluster
09:42 social joined #gluster
09:48 Philambdo joined #gluster
09:54 dusmant joined #gluster
10:09 tanuck joined #gluster
10:12 side_control joined #gluster
10:18 _polto_ joined #gluster
10:25 rjoseph|afk joined #gluster
10:25 shaunm joined #gluster
10:46 LebedevRI joined #gluster
10:51 kovshenin joined #gluster
10:53 _polto_ joined #gluster
11:33 jiku joined #gluster
11:49 rcampbel3 joined #gluster
12:47 partner johnbot: there have been mount issues along the way with many versions on Debian/Ubuntu; I can't confirm it for the given versions, though, but the cause was trying to mount a network filesystem before all the required pieces were up at the time of the attempt
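A commonly suggested way to express that dependency in fstab on Debian/Ubuntu of that era is to mark the mount as network-dependent; this is a general workaround rather than a confirmed fix for johnbot's case, and on Ubuntu 14.04 'nobootwait' additionally keeps a failed mount from blocking boot:

    # /etc/fstab
    172.31.47.154:gv1  /storage  glusterfs  defaults,_netdev,nobootwait  0  0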
13:38 rcampbel3 joined #gluster
13:56 wushudoin joined #gluster
14:12 chirino joined #gluster
14:19 anoopcs joined #gluster
14:52 pedrocr joined #gluster
14:54 pedrocr could someone point me towards the documentation on how glusterfs handles conflicts?
14:55 pedrocr I want to try it out with my use case but am struggling to figure out how it will handle disconnects
14:56 churnd joined #gluster
14:58 TvL2386 joined #gluster
15:10 bennyturns joined #gluster
15:27 mikedep333 joined #gluster
15:27 rcampbel3 joined #gluster
15:36 plarsen joined #gluster
15:43 sprachgenerator joined #gluster
15:59 DV joined #gluster
16:09 shaunm joined #gluster
16:12 shaunm joined #gluster
16:36 _polto_ joined #gluster
16:44 wkf joined #gluster
16:58 rcampbel3 joined #gluster
17:05 soumya joined #gluster
17:20 mmance for anyone interested, I fixed my 'transport endpoint is not connected' client with umount -l; forcing it wouldn't work
17:20 mmance I wish the client were more resilient. Network congestion kicks my clients and hangs them until I can umount and remount the volume
17:21 mmance is NFS any better at this than the fuse gluster client?
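The recovery mmance describes, as a sketch; the mount point and volume name are placeholders:

    # a hung fuse mount usually refuses a plain or forced umount; detach it lazily
    umount -l /mnt/gluster          # equivalently: fusermount -uz /mnt/gluster
    # then remount the volume with the native client
    mount -t glusterfs server1:/vidvol /mnt/gluster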
17:34 elico joined #gluster
17:45 kovshenin joined #gluster
18:00 gregor3005 joined #gluster
18:01 gregor3005 hi, yesterday I found gluster and played with it a little bit. works really awesome. now I'm testing geo-replication. is it really necessary to ssh between the locations as the root user?
18:04 gregor3005 and what's the best way to do QoS with geo-replication?
18:22 gregor3005 and I have the problem that my ssh port is not the default tcp/22, like somebody describes on Stack Overflow: http://stackoverflow.com/questions/27525456/glusterfs-geo-replication-on-non-standard-ssh-port
18:23 cyberbootje joined #gluster
18:38 cyberbootje joined #gluster
19:00 gregor3005 lol, I removed gluster from the test machine and reinstalled it, and now I can't start it because of many errors (CentOS 6, latest gluster)
19:07 gregor3005 I cleared all the leftover files under /var/*; now I can reinstall it
19:34 gregor3005 I had the problems because SELinux was enabled, but everywhere I look the advice is to disable it. not a good idea
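An alternative to disabling SELinux outright is to look at what it actually denied and, if the denials look benign, generate a local policy module; a sketch with a placeholder module name:

    # show recent AVC denials
    ausearch -m avc -ts recent
    # temporarily run permissive while testing (not persistent across reboots)
    setenforce 0
    # turn the logged denials into a local policy module and load it
    ausearch -m avc | audit2allow -M glusterlocal
    semodule -i glusterlocal.pp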
19:54 rotbeard joined #gluster
20:12 _polto_ joined #gluster
20:17 theron joined #gluster
20:38 gregor3005 I found in the logs, while creating a geo-replication session, that I have to open port tcp/24007. is this traffic encrypted? I thought everything would be transferred via ssh?
20:43 gregor3005 how should the volume on the slave side look? I get "...vol1_clone is not a valid slave volume..."
20:55 MacWinner joined #gluster
20:56 badone__ joined #gluster
21:02 gregor3005 is there a minimal howto anywhere on how to create a geo-replication? how to create the slave side is documented nowhere. I keep getting "is not a valid slave volume"
21:04 gregor3005 for example, it is documented nowhere that tcp/24007 should be opened. everywhere I found only the information that I have to open ssh, and here too I can't change the default port :-(
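A minimal sketch of the 3.5/3.6-era geo-replication setup, roughly as the documentation of the time describes it; host, volume, and brick names are placeholders. The slave side has to be a normal gluster volume that already exists and is started, which is usually what the "is not a valid slave volume" message is complaining about. The create step talks to the slave's glusterd (tcp/24007) for validation, while the data sync itself runs over ssh; for a non-default ssh port, suggestions at the time centered on an entry for the slave host in root's ~/.ssh/config or the geo-replication ssh-command config option, neither of which is verified here.

    # on the slave cluster: create and start an ordinary volume to receive the data
    gluster volume create slavevol slavehost:/bricks/slave/data
    gluster volume start slavevol

    # on the master cluster: distribute the pem keys and create the session
    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status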
21:28 badone__ joined #gluster
22:07 vincent_vdk joined #gluster
22:18 badone__ joined #gluster
22:32 badone__ joined #gluster
22:34 DV joined #gluster
22:45 elico joined #gluster
23:01 shaunm joined #gluster
23:40 pedrocr joined #gluster
23:53 stickyboy joined #gluster
