
IRC log for #gluster, 2013-11-19


All times shown according to UTC.

Time Nick Message
00:49 Guest67977 joined #gluster
01:16 diegol__ joined #gluster
01:17 andrewklau joined #gluster
01:27 asias_ joined #gluster
01:31 davidbierce joined #gluster
01:34 daMaestro joined #gluster
01:46 _BryanHm_ joined #gluster
02:00 bala joined #gluster
02:02 raghug joined #gluster
02:06 kevein joined #gluster
02:12 harish joined #gluster
02:34 zwu joined #gluster
02:36 RameshN joined #gluster
02:46 _ilbot joined #gluster
02:46 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:05 nasso joined #gluster
03:12 raghug joined #gluster
03:16 bharata-rao joined #gluster
03:19 kshlm joined #gluster
03:19 tg2 joined #gluster
03:30 sgowda joined #gluster
03:55 itisravi joined #gluster
04:00 shylesh joined #gluster
04:10 shruti joined #gluster
04:13 gdubreui joined #gluster
04:17 kanagaraj joined #gluster
04:18 sgowda joined #gluster
04:22 shubhendu joined #gluster
04:28 andrewklau joined #gluster
04:32 DV__ joined #gluster
04:35 andrewklau I have an OpenStack setup with 2 compute nodes also doubling as gluster nodes. is there any way I can get the nodes to mount themselves right after the glusterd service starts and before the openstack components do?
04:36 spandit joined #gluster
04:42 _pol joined #gluster
04:43 ppai joined #gluster
04:53 MiteshShah joined #gluster
05:18 bala joined #gluster
05:19 raghu joined #gluster
05:19 RameshN joined #gluster
05:31 bala joined #gluster
05:32 satheesh joined #gluster
05:32 bulde joined #gluster
05:34 satheesh joined #gluster
05:48 ndarshan joined #gluster
05:51 lalatenduM joined #gluster
05:52 psharma joined #gluster
05:54 gdubreui Hello, is the (approximate) availability date of gluster 3.5 known?
05:54 nshaikh joined #gluster
05:55 gdubreui never mind just found the planning :)
05:55 vpshastry joined #gluster
06:00 mohankumar__ joined #gluster
06:01 mohankumar joined #gluster
06:09 vimal joined #gluster
06:11 saurabh joined #gluster
06:17 CheRi joined #gluster
06:18 ngoswami joined #gluster
06:25 shri joined #gluster
06:26 Shri1 joined #gluster
06:36 spandit joined #gluster
06:39 vshankar joined #gluster
06:41 satheesh2 joined #gluster
06:49 ngoswami joined #gluster
07:05 ThatGraemeGuy joined #gluster
07:15 ThatGrae- joined #gluster
07:20 jtux joined #gluster
07:26 shri joined #gluster
07:44 vshankar joined #gluster
07:52 sgowda joined #gluster
07:53 nullck joined #gluster
07:54 hateya joined #gluster
07:58 shri joined #gluster
08:01 shri_ joined #gluster
08:07 ctria joined #gluster
08:07 ricky-ticky joined #gluster
08:13 getup- joined #gluster
08:18 mgebbe joined #gluster
08:20 shri_ hagarth: ping.. you there
08:21 shri_ hagarth: I added qemu_allowed_storage_drivers = [gluster] & source_ports = ['24007']  in nova.conf
08:21 shri_ hagarth: but openstack still uses mounted Glusterfs
08:22 hagarth shri_: hmm, that should not be the case. Let me check if any other tweaks are needed.
08:22 shri_ hagarth: also
08:23 shri_ CINDER_DRIVER=glusterfs & volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
08:23 mgebbe_ joined #gluster
08:23 shri_ this is set
08:24 shri_ hagarth: in my case, by default openstack operations use the mounted GlusterFS and are NOT using libgfapi :(
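For reference, a minimal sketch of the settings shri_ mentions, in config-file form. The option names are as given above; the cinder shares file is an assumption based on the stock GlusterFS Cinder driver and may not match this devstack deployment:

    # devstack localrc
    CINDER_DRIVER=glusterfs

    # nova.conf
    qemu_allowed_storage_drivers = gluster

    # cinder.conf
    volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
    glusterfs_shares_config = /etc/cinder/glusterfs_shares   # assumed: one "host:/volume" line per share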
08:31 _pol_ joined #gluster
08:40 lalatenduM joined #gluster
08:48 hybrid512 joined #gluster
08:49 mgebbe joined #gluster
08:57 bp_ joined #gluster
08:59 bp_ Out of curiosity: is there a way to see if two network endpoints (say, gluster1.example.org:4000:gv0 and gluster19.example.org:14059:gv0) actually point to the same gluster volume?
08:59 shri joined #gluster
09:03 hagarth shri: is there anything related to libgfapi in nova logs? i will bbiab
09:04 diegol__ joined #gluster
09:06 satheesh joined #gluster
09:07 shri hagarth: I checked in the /opt/stack/data/logs & /opt/stack/data/nova directories but I could not find anything useful
09:07 shri hagarth: Is there any place  where I can check nova logs ?
09:08 calum_ joined #gluster
09:09 harish_ joined #gluster
09:14 muhh joined #gluster
09:21 raar joined #gluster
09:23 sashko left #gluster
09:30 ngoswami joined #gluster
09:37 raghug bfoster: ping
09:42 spandit joined #gluster
09:44 glusterbot New news from newglusterbugs: [Bug 1031973] mount.glusterfs exits with code 0 even after failure. <http://goo.gl/uA6FGM>
09:46 hagarth joined #gluster
09:55 warci joined #gluster
09:55 warci hello all, i've got a changeset stuck in the "applying" phase, any way to get it unstuck?
09:56 warci oops sorry, wrong channel :)
10:01 getup- joined #gluster
10:04 shri hagarth: I checked few more logs
10:04 shri the ./screen-n-cond.2013-11-19-122020.log file contains some info for IP/volume etc.
10:04 shri --
10:04 shri screen-n-cond.2013-11-19-122020.log:2013-11-19 12:30:03.032 9162 DEBUG qpid.messaging [-] RACK[550d518]: Message({'oslo.message': '{"_unique_id": "50945b8bb0de45a3b30aa48b9bfc476c", "failure": null, "result": [{"instance_uuid": "d3ff8f49-7c89-4687-b0cf-cf577da61fd2", "virtual_name": null, "no_device": null, "connection_info": "{\\"driver_volume_type\\": \\"glusterfs\\", \\"serial\\": \\"49a67549-da80-4b54-b4dc-d4af287dc869\\", \\"data\\": {\\"name\\": \\"
10:04 shri volume-49a67549-da80-4b54-b4dc-d4af287dc869\\", \\"format\\": \\"raw\\", \\"qos_spec\\": null, \\"export\\": \\"9.122.119.114:/vol1\\", \\"access_mode\\": \\"rw\\", \\"options\\": null}}", "created_at": "2013-11-19T06:59:35.750405", "snapshot_id": null, "updated_at": "2013-11-19T06:59:42.928058", "device_name": "vda", "deleted": 0, "volume_size": null, "volume_id": "49a67549-da80-4b54-b4dc-d4af287dc869", "id": 1, "deleted_at": null, "delete_on_termination":
10:05 shri false}], "_msg_id": "6208b5b22cb2449da0ac16c7c4deaad8"}', 'oslo.version': '2.0'}) msg_acked /usr/lib/python2.7/site-packages/qpid/messaging/driver.py:1263
10:05 shri --
10:05 shri hagarth: is this useful ?
10:05 hagarth shri: checking that
10:06 rastar joined #gluster
10:06 hagarth shri: is this your gluster URI - 9.122.119.114:/vol1 ?
10:06 shri hagarth: Yes
10:06 shri my gluster volume info
10:07 hagarth shri: have you set allow-insecure to on on vol1 ?
10:07 hagarth and restarted glusterd after editing its configuration file?
10:07 shri Volume Name: vol1
10:07 shri Type: Distribute
10:07 shri Volume ID: 31c0eef3-0dfb-43b5-a511-eed18eb49e61
10:07 shri Status: Started
10:07 shri Number of Bricks: 1
10:07 shri Transport-type: tcp
10:07 shri Bricks:
10:07 shri Brick1: 9.122.119.114:/dir1
10:07 shri hagarth: nope :(
10:07 satheesh joined #gluster
10:08 hagarth shri: ah, we need to do 2 more configuration changes
10:08 shri hagarth: is that mandatory
10:08 hagarth shri: yes, without enabling them it won't work
10:08 hagarth these are the changes:
10:08 hagarth 1. gluster  vol set vol1 server.allow-insecure on
10:09 hagarth 2. Edit /etc/glusterfs/glusterd.vol and add the following line
10:09 hagarth option rpc-auth-allow-insecure on
10:09 hagarth after doing that, we would need to re-start glusterd
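A consolidated sketch of the steps hagarth lists above, assuming the volume is named vol1 (for source installs the file lives under /usr/local/etc, as noted further down):

    gluster volume set vol1 server.allow-insecure on
    # add this line to /etc/glusterfs/glusterd.vol (or /usr/local/etc/glusterfs/glusterd.vol):
    #     option rpc-auth-allow-insecure on
    # then restart glusterd, e.g.
    service glusterd restart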
10:11 hagarth shri: I think we need a howto for this. My deployment was using packstack, you mentioned that you are using devstack on Fedora right?
10:11 shri hagarth: Yeah.. I'm trying devstack + gluster on F19
10:11 shri hagarth: I really faced many problems :)
10:12 hagarth shri: ok, I will also try devstack here
10:12 shri hagarth: ok one small issue
10:12 hagarth shri: yeah
10:12 shri on my setup I have installed gluster through GIT.. configure..make / make install
10:12 raghug joined #gluster
10:12 shri hagarth: I could not see /etc/glusterfs/glusterd.vol file
10:14 hagarth shri: you can edit /usr/etc ... if you have installed from git
10:14 sohoo joined #gluster
10:14 ngoswami joined #gluster
10:15 shri hagarth: unfortunately /usr/etc directory is empty :(
10:16 sohoo hi all, we have a big issue with 1 node (out of 6): it has been loaded (processors at 100% with no IO) for almost 12 hours and we don't know when it will stop. it's causing the entire cluster to be unresponsive; we had to put iptables rules in place, but we don't know what gluster is doing for so long on that server
10:17 shri hagarth: got it - /usr/local/etc/glusterfs/glusterd.vol.. let me try now
10:18 hagarth shri: oops, i meant that
10:18 warci hello guys, is ext4 usable yet on gluster (we're on rhel 6.4)
10:18 shri hagarth: np.. I will try above steps .. let's see
10:19 warci i don't want to use XFS as we need to pay extra for that and we mostly have small files anyway
10:19 shri hagarth: Thanks for helping out :)
10:19 raghug joined #gluster
10:20 hagarth warci: if you are using 3.4, ext4 is usable
10:20 sgowda joined #gluster
10:20 hagarth shri: hopefully it works for you!
10:21 warci ok, thanks hagarth!
10:21 hagarth sohoo: which gluster processes consume 100%? are these the bricks?
10:22 sohoo we use EXT4 on 3.3 and centos 6.2
10:22 hagarth sohoo: are you using 3.3.2 ?
10:22 sohoo hagarth: yes mostly 2 bricks out of 6
10:23 sohoo no 3.3.1
10:23 sohoo 2 bricks take 100% cpu for 12 hours, we don't know what it's doing or when it will finish.
10:24 hagarth sohoo: I don't think 3.3.1 has the ext4 fix, wondering if that might be causing any readdir loops on those bricks
10:24 sohoo how can i check that? the server with the issue is working with EXT3
10:26 sohoo 3.3.2 had the fix?
10:26 hagarth sohoo: yes, 3.3.2 has the fix
10:26 hagarth sohoo: would it be possible to strace the glusterfsd processes and see what they are doing?
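A sketch of how one might inspect a busy brick with strace, as hagarth asks; the PID selection is illustrative, pick the glusterfsd process that is consuming CPU:

    pgrep -lf glusterfsd                                     # list brick processes and their PIDs
    strace -f -c -p <BUSY_BRICK_PID> &                       # attach and count syscalls
    sleep 30; kill %1                                        # detach after ~30s; strace prints a summary
    strace -f -tt -p <BUSY_BRICK_PID> -o /tmp/brick.strace   # or capture live calls with timestamps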
10:27 sohoo hagarth: can i upgrade only the storage nodes(not clients)?
10:27 bulde joined #gluster
10:27 hagarth sohoo: actually the fix is in dht translator which is part of the client stack
10:27 social joined #gluster
10:28 sohoo ops, so can i upgrade just the clients?
10:29 hagarth sohoo: yes you can, but can you check if it is the readdir problem or if there's a different problem that needs to be sorted out?
10:30 sohoo hagarth: the server that loads for 12 hours is on an old replica set that works with EXT3
10:30 asias_ joined #gluster
10:31 hagarth sohoo: by any chance did you recently upgrade the kernel on these servers?
10:31 shri hagarth: Many Thanks.. trying the configuration, fingers crossed! :)
10:33 andreask joined #gluster
10:33 sohoo Hagarth: no, actually it was meant to be, but no. what happened is that we took 1 server offline to upgrade the OS, peer probed it etc. from the remaining server (the one that is loaded now), then we stopped it because everything was loading and we decided to abort for now. so we put the old disks from the upgraded server back and started it, hoping it would continue as before, and it did, but now its pair
10:33 sohoo has started to load and hasn't stopped since
10:35 ThatGraemeGuy joined #gluster
10:35 sohoo i mean we peer probed server A (the one that is loaded) from server B, the upgraded one.. then powered it off (server B) and put the disks with the old OS back
10:37 hagarth sohoo: can you check what the brick processes are doing with strace?
10:37 sohoo hagarth: i'll try, the load is too high to check i think but i'll try
10:39 bulde1 joined #gluster
10:40 hagarth sohoo: ok
10:40 PM1976 joined #gluster
10:40 PM1976 Hi All
10:41 hagarth PM1976: hello there
10:41 PM1976 can anyone here tell me why, in geo-replication, when I modify sync-jobs I don't see any difference
10:41 PM1976 I set it to 4 but still have only 1 file sent at a time
10:41 PM1976 :P
10:42 sohoo hagarth, it is doing lots of futex and readv calls
10:44 hagarth sohoo: do you know if readv is happening from the network socket descriptor or a file descriptor that refers to a file on the underlying file system?
10:45 hagarth sohoo: would it be possible to fpaste the strace output somewhere?
10:45 ThatGraemeGuy joined #gluster
10:45 sohoo hagrath: let me try
10:46 hagarth PM1976: geo-replication involves crawl, followed by sync. the sync-jobs just increases the number of sync threads. in this case, the crawl seems to be happening slowly and hence increase in the number of sync-jobs is not showing any effect.
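For reference, a sketch of how sync-jobs is typically tuned on a geo-replication session (master volume and slave are placeholders; syntax may vary slightly across 3.x releases):

    gluster volume geo-replication <MASTERVOL> <SLAVE> config sync-jobs 4
    gluster volume geo-replication <MASTERVOL> <SLAVE> config    # list current session options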
10:50 sohoo epoll_wait(3, {{EPOLLIN, {u32=13, u64=60129542157}}, {EPOLLIN, {u32=18, u64=64424509458}}, {EPOLLIN, {u32=8, u64=17179869192}}}, 258, 4294967295) = 3
10:50 sohoo readv(13, [{"\200\0\0X", 4}], 1)        = 4
10:50 sohoo readv(13, [{"\0\0\n\262\0\0\0\0", 8}], 1) = 8
10:50 sohoo readv(13, [{"\0\0\0\2\0\23\320\5\0\0\1J\0​\0\0\16\0\5\363\227\0\0\0\34", 24}], 1) = 24
10:50 sohoo readv(13, [{"\0\0\r\342\0\0\0\0\0\0\0\0\0\0\0\0\​0\0\0\10\0\0\0\0\0\0\0\0\0\0\0\0"..., 56}], 1) = 56
10:50 sohoo futex(0x12e38e10, FUTEX_WAKE_PRIVATE, 1) = 0
10:50 sohoo readv(18, [{"\200\0\0@", 4}], 1)        = 4
10:50 sohoo joined #gluster
10:51 sohoo hagarth: does that make sense?
10:56 sohoo hagarth: i may add that restarting the volume, glusterd etc.. doesn't help. it keeps getting back to load
11:02 hagarth sohoo: do you know if self-healing is happening?
11:03 Vaflya joined #gluster
11:03 Vaflya Hi guys; little question about geo-replication, as I was not able to find an answer elsewhere
11:04 Vaflya when you setup a vol to be replicated, after a few seconds it is displayed as OK, but how do you know the progress???
11:05 Vaflya looks like OK means that the "channel" is OK, but I did not find a way to know how much is done / remaining
11:05 Vaflya if you could help... thanks in advance
11:08 calum_ joined #gluster
11:09 Vaflya anyone there :) ?
11:09 Remco Depends on where there is
11:10 Vaflya mmh in the room
11:10 Vaflya @Remco did you see my question?
11:11 Vaflya about geo-rep
11:11 andrewklau left #gluster
11:12 Remco Tried a normal gluster volume status ?
11:12 * Remco never tried geo-rep
11:13 hagarth Vaflya: 3.5 will contain better support for monitoring geo-replication including tracking progress, gathering statistics etc.
11:13 PM1976 hagarth: thanks for this info. Is there a way to improve in that case?
11:14 PM1976 And what should be the practical limit to set? Right now, I am at 4, but can we - for example - set it at 100?
11:14 Vaflya thanks
11:17 meghanam joined #gluster
11:17 meghanam_ joined #gluster
11:18 Vaflya PM1976 : what version are you using?
11:18 hagarth PM1976: the crawl is single threaded and can cause slowness when the disks/storage is slow.. 3.5 will again improve this behavior
11:19 sohoo its crazy.. how can you work in production like that. load 100 for 12 hours
11:19 rcheleguini joined #gluster
11:19 sohoo no IO, no network just CPU
11:21 Bhabah joined #gluster
11:21 PM1976 Vaflya: I am using the 3.4.1
11:22 PM1976 right now, our geo-replication is over a 1 Gbps MPLS link between 2 datacenters with a latency of 250ms (average)
11:23 PM1976 and I am getting something like 5-6 MB/sec (average) so about 40-50 Mbps
11:23 Bhabah Hi, i have installed gluster and i can see that the peers are running and the volumes are started, however when i create a document it does not get replicated. can anyone advise how i might find out why it's not replicating?
11:23 PM1976 We are sending only large files (minimum 100s of MB)
11:24 Vaflya I also use MPLS, a 200Mb/s link with 5ms latency, but replicating only 300GB of maildir is a pain
11:24 Vaflya no more than 10Mb/s used
11:24 Vaflya Bhabah -> local or geo-replication?
11:25 dusmant joined #gluster
11:26 PM1976 Wow, you have only 10Mbps with 5ms latency
11:26 Bhabah local i guess, i followed the guide on the website and everything looks to be running and no firewalls in between the servers (same subnet) but files don't replicate.  do you know where the log files are stored?
11:26 PM1976 that's pretty low, imho
11:26 Vaflya problem is many many small files
11:26 Vaflya maildir!
11:27 Vaflya on the other volume for the DB guys, their dumps are sent at almost 100Mb/s
11:29 Vaflya Bhabah > what kind of volume?
11:29 Vaflya replicate, distributed replicate?
11:29 PM1976 ok
11:29 Bhabah tried with xfs and lvm...replicate
11:29 PM1976 I did with many small files (35 KB average) and it was real pain
11:30 Vaflya mine are like 1KB :D
11:30 Vaflya we used to replicate using ZFS exports for volumes with small files but no possible here
11:34 getup- joined #gluster
11:35 Vaflya Bhabah try the heal commands to see if some files are not written to the other node
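A sketch of the heal commands Vaflya is referring to (volume name is a placeholder):

    gluster volume heal <VOLNAME> info              # entries still pending heal on each brick
    gluster volume heal <VOLNAME> info split-brain  # entries the self-heal daemon cannot resolve
    gluster volume heal <VOLNAME> full              # trigger a full self-heal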
11:36 hagarth Bhabah: are you per chance creating files in the bricks directly?
11:36 Bhabah i have ...i did also try using a client but weren't replicated either
11:38 hagarth Bhabah: files should be always created from clients
11:38 Vaflya yep, was doing some testing at the beginning just to see what happens and gluster does not like AT ALL messing with bricks :)
11:38 Vaflya did you try recreating the volume?
11:39 getup- joined #gluster
11:41 raghug joined #gluster
11:41 raghug bfoster: ping
11:48 PM1976 Thank you for the help. Let's wait for the 3.5 version :)
11:48 PM1976 Bye
11:52 kkeithley1 joined #gluster
11:52 verdurin joined #gluster
11:52 tjikkun_work joined #gluster
11:56 Bhabah ok looks to be working now..thanks
12:03 verdurin I followed the steps in https://bugzilla.redhat.com/show_bug.cgi?id=865700#c9
12:03 glusterbot <http://goo.gl/J5Mzfh> (at bugzilla.redhat.com)
12:03 glusterbot Bug 865700: high, high, ---, rhs-bugs, ASSIGNED , "gluster volume sync" command not working as expected
12:03 verdurin because one node seemed to be out of sync, but now the second node won't start
12:05 verdurin The errors when I try to start the service are at: http://fpaste.org/55055/
12:05 glusterbot Title: #55055 Fedora Project Pastebin (at fpaste.org)
12:09 bma hey... one question
12:10 bma is it possible to create containers inside containers
12:10 bma like this
12:10 bma parent_container/son_container/object
12:10 bma ??
12:13 shri joined #gluster
12:13 Vaflya left #gluster
12:21 hagarth joined #gluster
12:25 bgpepi joined #gluster
12:28 RameshN joined #gluster
12:35 verdurin Actually it seems similar to the Fedora 19 problem reported at: https://bugzilla.redhat.com/show_bug.cgi?id=1009980
12:35 glusterbot <http://goo.gl/a8uHQ5> (at bugzilla.redhat.com)
12:35 glusterbot Bug 1009980: unspecified, unspecified, ---, ndevos, NEW , Glusterd won't start on Fedora19
12:35 verdurin except on CentOS 6.4
12:39 verdurin Just tried downgrading to 3.4.0-8 and that hasn't helped
12:40 CheRi joined #gluster
12:41 bma i looked online and i found that, supposedly, GlusterFS has a REST API, but i can't find it
12:41 bma i found it for gluster-UFO
12:41 bma does it exist for GlusterFS?
12:44 DoctorWedgeworth joined #gluster
12:44 DoctorWedgeworth I've got a gluster mount showing every file twice. The filenames are the same, stats are identical, and they only show up once in the gluster export. Any idea?
12:45 kkeithley_ UFO is old. These days it's called glusterfs-openstack-swift and the devs are wrapping up getting it packaged in Fedora. Until then there's stuff on https://github.com/gluster/gluster-swift. I try to get the devs to hang out here but they're shy I guess.
12:46 verdurin I see errors like this:
12:46 verdurin E [glusterd-store.c:1845:glusterd_store_retrieve_volume] 0-: Unknown key: brick-0
12:47 verdurin then
12:47 verdurin E [xlator.c:390:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
12:47 verdurin E [graph.c:292:glusterfs_graph_init] 0-management: initializing translator failed
12:48 kkeithley_ verdurin: fpaste your glusterd log
12:49 verdurin kkeithley_: http://fpaste.org/55055/
12:49 glusterbot Title: #55055 Fedora Project Pastebin (at fpaste.org)
12:49 kkeithley_ this? ---->   [2013-11-19  12:04:17.583412] W [rpcsvc.c:1389:rpcsvc_transport_create]  0-rpc-service: cannot create listener, initing the transport failed
12:50 kkeithley_ maybe you're fighting with selinux?
12:51 verdurin It's in permissive mode (for other reasons)
12:59 bma <kkeithley_> i have gluster-swift installed... i found that you can have container on the root of the volume, after that its key-value. i can have a key like container/father_key/son_key/object and associate its value
12:59 bma other thing... how can i upload a local object to gluster-swift?
12:59 bma i see how to create/update and set the content-length
13:00 bma and how to copy an object between containers... but not from local
13:01 verdurin kkeithley_: removing /var/lib/glusterd worked
13:02 Guest67977 joined #gluster
13:04 kkeithley_ verdurin: okay.
13:05 verdurin which doesn't really explain things, I know. Thanks anyway.
13:05 kkeithley_ bma: upload an object to a container with `curl -v -X PUT -T $filename -H 'X-Auth-Token: $authtoken' -H 'Content-Length: $filelen' https://$myhostname:443/v1/AUTH_$myvolname/$mycontainername/$filename -k`
13:06 kkeithley_ that's old, that's using tempauth. Not sure what you do with keystone
13:06 kkeithley_ I haven't looked at it in a while
13:06 bma <kkeithley_> thanks... i found this in the openstack swift API ... http://docs.openstack.org/api/openstack-object-storage/1.0/content/folders-directories.html
13:06 glusterbot <http://goo.gl/XeX1AX> (at docs.openstack.org)
13:06 bma is it possible on gluster-swift?
13:07 kkeithley_ gluster-swift works exactly the same as ordinary swift. Only the back-end storage is different
13:07 kkeithley_ at least that's how it's supposed to be
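An illustrative tempauth flow around the PUT kkeithley_ shows above. The auth endpoint, user and key are placeholders; in gluster-swift the account maps to the volume name, and keystone-based setups differ:

    # 1. get a token from tempauth
    curl -v -k -H 'X-Storage-User: $myvolname:$username' -H 'X-Storage-Pass: $password' \
         https://$myhostname:443/auth/v1.0
    # the response headers carry X-Auth-Token and X-Storage-Url
    # 2. upload a local file into a container
    curl -v -k -X PUT -T $filename -H "X-Auth-Token: $authtoken" \
         https://$myhostname:443/v1/AUTH_$myvolname/$mycontainername/$filename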
13:08 vimal joined #gluster
13:12 dusmant joined #gluster
13:13 rwheeler joined #gluster
13:15 glusterbot New news from newglusterbugs: [Bug 1002020] shim raises java.io.IOException when executing hadoop job under mapred user via su <http://goo.gl/M8tYEm> || [Bug 1005838] Setting working directory seems to fail. <http://goo.gl/67SUJ3> || [Bug 1006044] Hadoop benchmark WordCount fails with "Job initialization failed: java.lang.OutOfMemoryError: Java heap space at.." on 17th run of 100GB WordCount (10hrs on 4 node cluster) <http
13:18 B21956 joined #gluster
13:21 calum_ joined #gluster
13:25 ndarshan joined #gluster
13:25 ira joined #gluster
13:32 tziOm joined #gluster
13:45 ipvelez joined #gluster
13:45 glusterbot New news from newglusterbugs: [Bug 1020183] Better logging for debugging <http://goo.gl/oLzwqB> || [Bug 854753] Hadoop Integration, Address known write performance issues <http://goo.gl/cV1FPd>
13:49 hagarth joined #gluster
14:02 getup- joined #gluster
14:11 rwheeler joined #gluster
14:15 glusterbot New news from newglusterbugs: [Bug 1032080] Enable quorum in profile virt <http://goo.gl/Ezeqvf>
14:17 diegol__ joined #gluster
14:18 jskinner_ joined #gluster
14:21 hagarth joined #gluster
14:21 baoboa joined #gluster
14:30 ipvelez hello good morning
14:31 ipvelez I am upgrading from 3.2.7 to 3.4.1 but now glusterd won't start
14:34 davidbierce joined #gluster
14:35 geewiz joined #gluster
14:36 ipvelez I see some errors in the log that say '[rpc-transport.c:253:rpc_transport_load] 0-rpc-transport: /usr/lib/glusterfs/3.4.1/rpc-transport/rdma.so: cannot open shared object file: No such file or directory
14:36 ipvelez [2013-11-19 13:38:04.585395] W [rpc-transport.c:257:rpc_transport_load] 0-rpc-transport: volume 'rdma.management': transport-type 'rdma' is not valid or not found on this machine
14:37 ipvelez what may be the problem?
14:38 kkeithley_ you can ignore that if you don't have Infiniband gear. (or haven't specified rdma transport for any of your volumes)
14:39 ipvelez but the volume initialization fails
14:39 dbruhn joined #gluster
14:40 kkeithley_ @upgrade
14:40 glusterbot kkeithley_: I do not know about 'upgrade', but I do know about these similar topics: '3.3 upgrade notes', '3.4 upgrade notes'
14:41 kkeithley_ 3.2.x to 3.3 or 3.4 is not a simple upgrade. See ,,(3.3 upgrade notes)
14:41 glusterbot http://goo.gl/qOiO7
14:42 kkeithley_ and ,,(3.4 upgrade notes)
14:42 glusterbot http://goo.gl/SXX7P
14:42 bala joined #gluster
14:43 ipvelez ok thanks!! was just reading those right now!
14:46 glusterbot New news from newglusterbugs: [Bug 1032099] Packaging and embedding docs into plugin artifact. <http://goo.gl/SdAeMD>
14:46 peacock_ joined #gluster
14:49 davidjpeacock Hey people. :-)  I think I have a silly question.  I just started playing around trying to setup geo-replication but I'm tripping at an early hurdle.  `gluster volume geo-replication`... results in geo-replication being called an unrecognized word.  Any idea why?
14:53 davidjpeacock Did geo-replication get removed in version 3.4?
14:53 kkeithley_ no
14:54 davidjpeacock Thanks kkeithley_ :-)  Any idea on why the command can't be found?  I see no references in the online help either.
14:55 social davidjpeacock: only packages got split, you are probably missing glusterfs-geo-replication package
14:55 davidjpeacock Ah, duh.  That would explain it. :-)  Thanks for the pointer!
14:55 * social been there done that ,)
14:55 davidjpeacock Glad I'm not alone... ;-)
14:56 davidjpeacock yeah... cooking on gas now.
14:58 plarsen joined #gluster
14:58 social kkeithley_: have you seen this? http://paste.fedoraproject.org/55105/73043138 - those are last words from nodes hitting oom-kill but to be fair I think there's nothing usable in that log
14:58 glusterbot Title: #55105 Fedora Project Pastebin (at paste.fedoraproject.org)
14:58 vpshastry joined #gluster
15:01 kkeithley_ I've not seen that, but I've heard a rumor or two lately about gluster daemons getting oom-killed. There are these lines: [2013-11-18  13:24:47.022797] D [mem-pool.c:422:mem_get]  (-->/usr/lib64/libgfrpc.so.0(rpcsvc_notify+0x103) [0x7fd4dcac00d3]  (-->/usr/lib64/libgfrpc.so.0(rpcsvc_handle_rpc_call+0x116)  [0x7fd4dcabfe56]  (-->/usr/lib64/libgfrpc.so.0(rpcsvc_request_create+0x79)  [0x7fd4dcabfab9]))) 0-mem-pool: Mem pool is full. Callo
15:01 kkeithley_ so I'm not sure they're completely unusable
15:02 kkeithley_ this is a glusterfsd or the glusterd that's being oom-killed.
15:02 social kkeithley_: glusterd
15:02 kkeithley_ ?
15:03 social kkeithley_: we have this on production but I'm completely unable to reproduce it elsewhere, this is the log from my closest reproduction :/
15:03 social kkeithley_: thing is it's oom-killing only one node - the first one where we run our monitoring which calls status/profile/heal commands quite often but not often enough to cause issues
15:04 kkeithley_ I'm curious about all the oplock Lock Returned -1 preceding the pool full
15:04 kkeithley_ maybe the op lock is a red herring.
15:05 social kkeithley_: when I got this I threw in 3 node setup where 2 nodes had replica volume and were geo-replicating from node 2 to node 3. I threw in while loop with ~100 commands for status/heal/profile and I ran some traffic generator on volume
15:05 social kkeithley_: the oom-kill happened after I killed the while loop and geo-replication kicked in from its hung state (as it couldn't contact glusterd for a while)
15:06 social but the "Mem pool is full. Calloc" messages started way before
15:07 kkeithley_ social: 3.4.1?
15:07 social kkeithley_: yep
15:07 kkeithley_ File a bug?
15:07 social I think I have one against 3.3.1 but not yet, I fail to reliably reproduce it :/
15:07 hagarth @fileabug
15:08 social I already have it against 3.3.1
15:09 social ah I thought this is related to 841617 but actually it isn't :/ ok I'll make one :)
15:09 kkeithley_ @factoid
15:10 kkeithley_ @factoids
15:10 kkeithley_ @bug
15:10 glusterbot kkeithley_: (bug <bug_id> [<bug_ids>]) -- Reports the details of the bugs with the listed ids to this channel. Accepts bug aliases as well as numeric ids. Your list can be separated by spaces, commas, and the word "and" if you want.
15:10 hagarth @file a bug
15:10 kkeithley_ not the droid I was looking for
15:11 hagarth @file
15:11 glusterbot hagarth: I do not know about 'file', but I do know about these similar topics: 'file attributes', 'files edited with vim become unreadable on other clients with debian 5.0.6', 'get the file attributes', 'small files', 'stale nfs file handle'
15:11 chirino joined #gluster
15:12 lpabon joined #gluster
15:12 ndevos kkeithley_: you want to file a bug?
15:12 glusterbot http://goo.gl/UUuCq
15:13 hagarth ndevos: i always seem to get this wrong ;)
15:13 ndevos hagarth: yeah, the "file a bug" one is different...
15:13 glusterbot http://goo.gl/UUuCq
15:14 ndevos @learn fileabug as Please file a bug at http://goo.gl/UUuCq
15:14 glusterbot ndevos: The operation succeeded.
15:14 ndevos hagarth: now you can use ,,(fileabug)
15:14 glusterbot hagarth: Please file a bug at http://goo.gl/UUuCq
15:15 ndevos @fileabug
15:15 glusterbot ndevos: Please file a bug at http://goo.gl/UUuCq
15:15 hagarth ndevos: thanks!
15:15 chirino_m joined #gluster
15:15 ndevos ~fileabug | hagarth
15:15 glusterbot hagarth: Please file a bug at http://goo.gl/UUuCq
15:15 ndevos hah! we'll get manu bugs this way
15:15 ndevos s/manu/many/
15:15 glusterbot ndevos: Error: I couldn't find a message matching that criteria in my history of 1000 messages.
15:16 hagarth ndevos: may their tribe increase :)
15:16 hagarth hopefully our bug fix rate also keeps up with that!
15:17 ndevos oh, yes, but that wont be easy!
15:17 social kkeithley_:  Bug 1032122
15:17 glusterbot Bug http://goo.gl/drW5GO unspecified, unspecified, ---, kparthas, NEW , glusterd getting oomkilled
15:17 kkeithley_ thanks
15:18 vpshastry left #gluster
15:18 social kkeithley_: I don't think that is helpful as its far from any small and reliable reproducer :/
15:19 kkeithley_ social: maybe, but it's something we can track without it being forgotten
15:19 kkeithley_ ndevos: no, what I want is a smarter glusterbot. And egg in my beer.
15:21 davidjpeacock social: kkeithley_: you guys are lead developers for gluster?
15:22 davidjpe_ joined #gluster
15:23 kkeithley_ hagarth, ndevos, and I are Red Hat employees working on gluster.
15:23 davidjpeacock Nice to meet you :-)
15:23 kkeithley_ Likewise
15:23 davidjpeacock Where are you based?
15:24 kkeithley_ There are other Red Hat employees lurking here too that I didn't name.
15:24 davidjpeacock joined #gluster
15:24 kkeithley_ I'm in the Boston area, hagarth is in Bangalore, ndevos is in Amsterdam I think.
15:24 * ndevos waves _o/ , but he is technically only fixing bugs and helping customers on Red Hat Storage
15:25 jbautista|brb joined #gluster
15:25 davidjpeacock Forgive my connection bouncing; this is not too conducive to conversation!
15:25 davidjpeacock I'm just outside Toronto
15:26 Guest19728 joined #gluster
15:26 davidjpeacock joined #gluster
15:26 davidjpeacock kkeithley: How long have you been working on Gluster?
15:27 kkeithley_ With gluster, since Feb 2011 on a layered product called HekaFS that we put in Fedora 17. Directly on gluster since the acquisition.
15:27 zaitcev joined #gluster
15:28 davidjpeacock HekaFS became Gluster?  Or was that previous tech?
15:28 Technicool joined #gluster
15:29 kkeithley_ No, HekaFS was just features we added to gluster, gluster already existed for several years.
15:29 davidjpeacock Ah ok, I see.
15:29 kkeithley_ hekafs features are trickling slowly into gluster. We finally merged encryption-at-rest (on disk) into the head of the tree last week.
15:29 social reminds me how is it with encryption ?
15:30 social I'd love to see something like encryption at rest, e.g. only clients being able to read data. is such a thing possible with gluster?
15:30 davidjpeacock As you have gathered I'm still new to gluster but I'd love to help any way I can.
15:31 kshlm joined #gluster
15:31 kkeithley_ right, well, it's merged on the head of the tree. If you want to experiment with it now you can by checking out the bleeding edge sources.
15:32 chirino joined #gluster
15:33 kkeithley_ And encryption on the wire was released to the wild in 3.4.0
15:33 davidbierce joined #gluster
15:34 davidjpeacock joined #gluster
15:34 davidbierce joined #gluster
15:35 kkeithley_ No, HekaFS was just features we added to gluster, gluster already existed for several years. <obscure cultural reference>We liked it so much we bought the company</obscure cultural reference>
15:36 davidjpeacock hehe intriguing
15:36 Remco Saves a lot of time merging your changes every time something upstream changed
15:36 Remco Now you are the upstream, so you don't need to care :D
15:37 * davidjpeacock git pulls
15:39 davidjpe_ joined #gluster
15:39 davidjpeacock_ joined #gluster
15:39 mattapp__ joined #gluster
15:39 ndk joined #gluster
15:40 raghug joined #gluster
15:43 RameshN joined #gluster
15:44 kkeithley_ yeah, we never bothered to rebase HekaFS to 3.3.
15:44 ccha3 joined #gluster
15:46 glusterbot New news from newglusterbugs: [Bug 1032122] glusterd getting oomkilled <http://goo.gl/drW5GO>
15:47 compbio_ joined #gluster
15:47 Rydekull_ joined #gluster
15:48 the-me_ joined #gluster
15:48 Peanut___ joined #gluster
15:48 cicero_ joined #gluster
15:48 Ramereth|home joined #gluster
15:48 Ramereth|home joined #gluster
15:51 ultrabizweb_ joined #gluster
15:52 yosafbridge` joined #gluster
15:57 tqrst joined #gluster
15:58 chirino joined #gluster
15:58 tqrst is the recommendation for bricks still xfs with inode size 512?
15:59 wrale joined #gluster
16:00 kkeithley_ best practice is xfs, default inode size, mount with -o inode64.
16:01 kkeithley_ last I heard anyway
16:01 tqrst thanks
16:02 tqrst does the inode64 recommendation also apply to ext4?
16:02 tqrst (a lot of my older bricks are ext4)
16:03 kkeithley_ correction, inode size 512 (default is 256)
16:03 kkeithley_ -o inode64 is xfs only.
16:04 dusmant joined #gluster
16:04 kkeithley_ despite what the name might suggest, inode64 merely puts the metadata in the middle of the disk, not at the beginning.
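Putting kkeithley_'s recommendation together, a sketch for preparing an xfs brick (device and mount point are placeholders):

    mkfs.xfs -i size=512 /dev/sdX1
    mkdir -p /bricks/brick1
    mount -o inode64 /dev/sdX1 /bricks/brick1
    # or in /etc/fstab:
    #     /dev/sdX1  /bricks/brick1  xfs  inode64  0 0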
16:05 tqrst right
16:05 tqrst just realized
16:13 tziOm joined #gluster
16:21 bulde joined #gluster
16:22 tqrst does a full heal also include a rebalance at the same time? (and does a rebalance imply a full heal?)
16:22 tqrst I just replaced two dead bricks and have been long overdue for a rebalance, so I figure I might as well do both
16:24 _pol joined #gluster
16:24 tqrst kkeithley_: also, aren't there potential issues with mixing inode64 and not-inode64 mounted bricks?
16:25 ThatGraemeGuy joined #gluster
16:27 mohankumar joined #gluster
16:27 davidbierce joined #gluster
16:30 rcheleguini joined #gluster
16:37 jskinner_ joined #gluster
16:38 kkeithley_ tqrst: not that I know of. Other than performance it should be transparent to gluster.
16:46 glusterbot New news from newglusterbugs: [Bug 1032172] Erroneous report of success report starting session <http://goo.gl/q14iUV>
16:47 sprachgenerator joined #gluster
16:48 davidbierce joined #gluster
16:50 chirino joined #gluster
16:55 zerick joined #gluster
16:59 ira joined #gluster
17:08 tomased joined #gluster
17:11 bulde joined #gluster
17:14 chirino joined #gluster
17:14 dbruhn joined #gluster
17:16 dbruhn joined #gluster
17:17 MacWinner i have 2 sites with 3 nodes each that are geographically dispersed, and each site has a gluster replica-3 cluster. is it safe to sync the 2 sites with something like unison?  basically pick one node from each site to be a unison pair?
17:18 MacWinner the data on each site is mainly read only.. very few write operations
17:18 tg2 joined #gluster
17:18 MacWinner not big files either.. basically PDFs and docs less than 25 megs.. and JPGs about 200k each.. 10's of thousands of files
17:19 semiosis MacWinner: you should not write directly to bricks.  if you mean create a client mount on one node & point unison at that, should be fine.  dont point unison directly at a brick
17:19 raghug joined #gluster
17:19 MacWinner got it.. that's what I meant.. basically create client mount on both sides and unison between them
17:20 semiosis what happens if people at both sites write to the same file on their local cluster?
17:20 semiosis at the same time
17:20 tg2_ joined #gluster
17:20 MacWinner semiosis, chances are very unlikely in my scenario..  i'm writing the MD5 hash of the file.. and i never overwrite it.
17:21 MacWinner i think unison has some sort of collision detection anyway
17:21 semiosis cool
17:21 semiosis you should be fine then
17:21 geewiz joined #gluster
17:21 MacWinner cool, thanks
17:21 semiosis yw
17:21 semiosis let us know how it goes
17:22 semiosis oh one more thing, since you have replica 3 you should enable quorum
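A sketch of the quorum settings semiosis is suggesting (volume name is a placeholder; check the defaults for your release):

    # client-side quorum: allow writes only while a majority of the replica set is up
    gluster volume set <VOLNAME> cluster.quorum-type auto
    # optional server-side quorum enforced by glusterd
    gluster volume set <VOLNAME> cluster.server-quorum-type server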
17:22 jbrooks Hey guys, can anyone direct me to the best / most up to date docs on using gluster + cinder
17:22 T0aD no
17:24 davidjpeacock jbrooks: this is what was revealed to me when I searched http://www.gluster.org/community/documentation/index.php/GlusterFS_Cinder
17:24 glusterbot <http://goo.gl/Q9CVV> (at www.gluster.org)
17:25 jbrooks davidjpeacock: that was revealed to me, as well -- that's from the grizzly time frame, I'll report back if I find something current
17:25 jbrooks thanks
17:25 davidjpeacock sorry I can't be more useful :-(
17:31 semiosis jbrooks: feel free to update that wiki page!
17:39 Clay-SMU joined #gluster
17:39 Clay-SMU .
17:40 Clay-SMU Anyone alive?
17:40 Mo__ joined #gluster
17:41 hagarth joined #gluster
17:54 semiosis hi
17:54 glusterbot semiosis: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
17:54 semiosis Clay-SMU: ^
17:56 Clay-SMU hehe, thanks semiosis, haven't been on IRC since 94 :)  I spotted the log bot link, I'm looking for my issue there.
18:01 LoudNoises joined #gluster
18:05 bulde joined #gluster
18:08 elyograg I think I've discovered what causes gluster to get mounted on a /tmp/mntXXXXX filesystem - changing quota limits.
18:09 elyograg the mounts seem to linger, though.
18:17 diegol__ joined #gluster
18:23 kanagaraj joined #gluster
18:28 kanagaraj joined #gluster
18:28 _pol joined #gluster
18:32 rwheeler joined #gluster
18:47 mistich joined #gluster
18:47 mistich any one know how to collect the client io stats in gluster?
18:49 aliguori joined #gluster
18:50 semiosis sigusr1
18:50 semiosis iirc
18:52 chirino joined #gluster
18:52 rotbeard joined #gluster
18:56 mistich anyone know how to collect the client io stats in gluster?
18:58 tqrst mistich: semiosis's message was for you, I think? (try SIGUSR1)
18:58 kkeithley_ sigusr1!    kill -USR1 `pidof glusterfs`
19:02 mistich will that give me io stats such as sar or iostat
19:03 Clay-SMU depends on the version of gluster, current version has performance mon built in
19:04 mistich 3.4.1 I am using
19:05 Clay-SMU So speaking of, anyone know if you can use gluster client for Xen (citrix) 6.2?  and which rpm should be used?
19:07 ipvelez_ joined #gluster
19:08 Clay-SMU my bad, looks like it only works on server http://gluster.org/community/documentation/index.php/Gluster_3.2:_Running_GlusterFS_Volume_Profile_Command
19:08 glusterbot <http://goo.gl/BpO6hT> (at gluster.org)
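For reference, a sketch of the server-side profiling commands that page documents (volume name is a placeholder); per-client FUSE I/O still has to be inferred from these, or from generic tools such as top/iotop on the client:

    gluster volume profile <VOLNAME> start
    gluster volume profile <VOLNAME> info      # per-brick latency and fop counts
    gluster volume top <VOLNAME> read          # busiest files by read calls
    gluster volume profile <VOLNAME> stop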
19:11 mistich yeah I can get all the server stats but not sure how to get client stats
19:11 Clay-SMU IB or TCP?
19:12 mistich TCP
19:12 andreask joined #gluster
19:14 Clay-SMU If you don't use TOE, you can monitor the bandwidth stats in TCP, not sure if that would give you what you need.  If it's TOE you're screwed, it's like a monitoring black hole
19:15 mistich lol ok thanks
19:16 ThatGraemeGuy_ joined #gluster
19:25 samppah is there a way to reset output of gluster volume top ?
19:26 samppah ah, gluster volume top volName clear :)
19:30 ipvelez_ joined #gluster
19:37 DV__ joined #gluster
19:38 bgpepi joined #gluster
19:47 glusterbot New news from newglusterbugs: [Bug 947153] [TRACKER] Hadoop Compatible File System (HCFS) <http://goo.gl/kdWN2N> || [Bug 909451] [RFE] Create a Web based File browser UI for any HCFS <http://goo.gl/dXMBYr>
19:48 tqrst I just replaced two dead bricks with empty drives and launched a full heal. Are there any potential risks to removing a lot of folders from my volume before the full heal is done? (I was in the middle of archiving a bunch of large folders when the drives failed)
19:54 dneary joined #gluster
20:01 hateya joined #gluster
20:06 robw8633 joined #gluster
20:07 robw8633 gluster newbie...can't stop/delete volume due to locks...advice?
20:10 robw8633 left #gluster
20:14 daMaestro joined #gluster
20:22 Guest19728 joined #gluster
20:24 _pol joined #gluster
20:39 MacWinner semiosis, i've been testing csync2+lsyncd as a possible solution rather than unison to sync the files around… seems like it could work pretty well to sync my 2 glusters across geographies
20:40 semiosis "Lsyncd watches a local directory trees event monitor interface (inotify or fsevents)" from https://code.google.com/p/lsyncd/
20:41 glusterbot Title: lsyncd - Lsyncd (Live Syncing Daemon) synchronizes local directories with a remote targets - Google Project Hosting (at code.google.com)
20:41 semiosis that works on a gluster client mount???
20:42 badone joined #gluster
20:51 ThatGraemeGuy joined #gluster
20:51 squizzi joined #gluster
20:55 Guest19728 joined #gluster
21:12 andreask joined #gluster
21:15 failshell joined #gluster
21:15 Clay-SMU I'm still not sure if I can use gluster client for Xenserver (citrix) 6.2?  and which rpm should be used? or is there a link to some doc's on either site I'm missing?
21:37 squizzi left #gluster
21:41 Liquid-- joined #gluster
21:57 zwu joined #gluster
22:18 [o__o] left #gluster
22:20 [o__o] joined #gluster
22:22 mistich joined #gluster
22:24 mistich anyone know how to watch the IO on a gluster volume on the client side same as you do with iostat
22:25 semiosis mistich: you could use iptraf
22:25 mistich that is network; does it do IO?
22:26 semiosis well gluster client is a network client
22:26 semiosis but i see what you mean
22:26 mistich I am not network bound, I have 9 gigs free :)
22:27 mistich but the fuse client is using a lot of cpu
22:27 mistich io on the gluster nodes looks good too
22:28 semiosis what are you trying to do?  whats the problem?
22:29 mistich trying to use gluster to store rrds, and when it tries to read them it is very slow
22:31 failshel_ joined #gluster
22:31 mistich so as the application is running cpu on glusterfs is at 200%
22:31 fidevo joined #gluster
22:32 mistich so something is holding back the fuse client just can't find what is doing it
22:32 semiosis that's one of the more difficult kinds of workloads for glusterfs
22:32 dkorzhevin joined #gluster
22:32 semiosis ,,(pasteinfo)
22:32 glusterbot Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
22:33 mistich yeah I know! just want to make sure I don't miss anything before I trash the whole system
22:34 mistich http://ur1.ca/g2cjk is the info
22:34 glusterbot Title: #55246 Fedora Project Pastebin (at ur1.ca)
22:35 semiosis why did you turn read-ahead off?
22:35 semiosis seems like that might be helpful
22:35 semiosis idk for sure tho
22:35 mistich tried it both ways didn't help either way
22:35 semiosis ah
22:40 mistich an other suggestions?
22:42 mistich I even added ssd to all the gluster servers, still did not help
22:47 JoeJulian Use an rrd engine written in java and modify it to use the libgfapi jni?
22:47 * JoeJulian is not a java guy, so I don't have any clue what that involves.
22:48 badone joined #gluster
22:49 mistich me either :)
22:49 JoeJulian semiosis might though...
22:50 mistich Just caught it been a long day
22:52 remlabm joined #gluster
22:53 remlabm hello.. looking to see if gluster can help with my issue.. i have two filers.. NFS mounts on a server.. i need to replicate from one mount to another.. i know it's traditional to have a gluster server in front of each mount.. however i'm trying to accomplish this with 1 server as the "daemon"
22:54 remlabm *service
22:55 JoeJulian I think the problem with what I'm understanding of your description would be that I don't think you can write the extended attributes needed over nfs.
22:57 remlabm i have 2 mounts.. x.x.x.x:/data x.x.x.x:/data.. i need them to replicate.. however they are true network filers.. and are out of my control. so i could use rsync.. but thats not a reliable solution
22:59 semiosis JoeJulian: lol
23:01 remlabm i guess what i'm asking is.. is there a way to replicate with gluster without having a gluster server in front of it.. just from a regular share?
23:03 JoeJulian What are these filers?
23:03 JoeJulian remlabm: ^?
23:03 JoeJulian no
23:03 remlabm netapp filers
23:03 remlabm and the other is isilon
23:04 remlabm its a weird problem to have.. i know.. been banging my head off the desk for a few days now
23:04 JoeJulian Those both do iSCSI. Why not just create luns, mount those and put a filesystem on it, then use gluster on that?
23:05 remlabm can you elaborate a bit?
23:06 remlabm sorry.. just checked.. cant use iSCSI on this netapp..
23:06 remlabm its a deprecated filter... EOL'ed
23:06 remlabm filer*
23:06 remlabm this is a stop gap fix
23:07 JoeJulian Then the last idea I have would be to make a large image file on it, attach that to a loopback device, put a filesystem on that, mount it, and use that behind gluster.
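A sketch of the loopback approach JoeJulian outlines, assuming the NFS filer is mounted at /mnt/netapp (paths and size are illustrative):

    truncate -s 10T /mnt/netapp/gluster-brick.img      # large sparse image file on the filer
    losetup /dev/loop0 /mnt/netapp/gluster-brick.img   # attach it to a loop device
    mkfs.xfs -i size=512 /dev/loop0                    # local filesystem on the loop device
    mkdir -p /bricks/netapp
    mount -o inode64 /dev/loop0 /bricks/netapp         # this path can now back a gluster brick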
23:08 JoeJulian Or just take all the disks out of your netapp and use them as bare metal... ;(
23:08 JoeJulian ;)
23:08 JoeJulian Gah! I can't type today!
23:08 JoeJulian @meh
23:09 glusterbot JoeJulian: I'm not happy about it either
23:09 dbruhn remlabm, are you just trying to replicate your data from an old netapp box to an isilon system?
23:09 remlabm yup
23:09 dbruhn or did I miss a bunch of requirements here
23:09 remlabm basically
23:09 dbruhn is it a new isilon system?
23:09 remlabm but it needs to live for a few weeks
23:09 remlabm define new
23:10 remlabm ... :)
23:10 dbruhn as in still under support and still having a isilon rep trying to buy you lunch
23:10 dbruhn lol
23:10 remlabm haha.. yea its the new location
23:10 remlabm and will be long term
23:10 dbruhn well then... do what I've done... make it Isilons problem
23:10 dbruhn lol
23:10 diegol__ joined #gluster
23:10 remlabm hahaha
23:11 dbruhn Seriously just call their support, or email their support and ask them for a temporary solution for the migration
23:11 dbruhn They've done it before
23:12 dbruhn They offered to move 100TB of data off of an old Exanet system for me to Isilon as part of my purchase
23:12 dbruhn That's the nice thing about using expensive storage vendors
23:13 dbruhn How much data are you talking here?
23:13 dbruhn and is it logically defined?
23:14 remlabm well see theres the issue... the netapp is actually being split.. some data is going to 2x isilons in 2 locations
23:15 dbruhn Ok... that's not an issue so far...
23:15 remlabm oh and i think its > 100TB
23:15 dbruhn I guess what I am asking is, could you chunk the data out in smaller pieces to the new location using rsync and then switch over the stuff you need as you can?
23:15 dbruhn more manual work, but it will get the job done
23:15 remlabm it needs to live for a few months
23:16 remlabm until the 2nd location is up
23:16 dbruhn yeah, just script the chunks
23:16 dbruhn rsync is your bottleneck typically not the servers/etc. you could easily spin up several threads and move the data that way.
23:16 remlabm if i could figure out how to use snapmirror to omit directories on the volume.. that would fix all my issues
23:16 dbruhn just wouldn't want to do a single sync on a top level
23:18 dbruhn 100TB of out of support net app, what is that like 4 racks worth of equipment?
23:18 dbruhn lol
23:18 remlabm :)
23:18 dbruhn gluster isn't the tool you are looking for though
23:19 dbruhn you should replace all that isilon stuff with gluster anyway
23:19 remlabm yea im noticing... its just a stop gap.. to help with some other issues we need to sort first
23:19 dbruhn I can't lie, I've had Isilon, stuff's not bad
23:20 remlabm thanks guys.. back to the drawing board
23:20 remlabm haha
23:25 cfeller JoeJulian: your comment from yesterday "...how about adding the mount option fetch-attempts=600", did the trick.
23:26 cfeller I didn't set it to 600, but I set it to 3, and that was sufficient.  I've rebooted a half dozen times, w/out issue.
23:26 cfeller Your point that some networking chipsets take longer to initialize is interesting. It is a Dell (albeit slightly older) server (so I think enterprise grade, as is everything else around here), so I was surprised that might be the problem
23:26 cfeller but nevertheless, thanks for that.
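A sketch of the mount option being discussed, in /etc/fstab form (server, volume and mount point are placeholders; fetch-attempts controls how many times the initial volfile fetch is retried):

    server1:/myvol  /mnt/gluster  glusterfs  defaults,_netdev,fetch-attempts=3  0 0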
23:29 remlabm left #gluster
23:32 mattapp__ joined #gluster
23:35 _pol joined #gluster
23:37 elyograg my testbed is only writing data at about 5MB/s.  It's got linux software RAID5 on four disks.  on that raid5 are eight LVM logical volumes, to mirror production.  I suspect that the slowness is a combination of write contention due to there being so many logical volumes, plus the raid5 write penalty.  does that sound reasonable?  The network is gigabit and is showing error-free on both the switches and linux.  The volume is mounted via localhost.
23:48 Clay-SMU anyone know the best way to mount gluster volume on xen 6.2 (via xe sr-create)  I've got it all working to a regular mount point, but at a loss on how to make it available to xen
23:53 leblaaanc joined #gluster
23:54 leblaaanc Hey guys… Quick question… if I just want to move a brick (literally the file path on the same server) do I … gluster volume remove-brick replica 1 … then add it back?
23:54 JoeJulian cfeller: cool, glad it worked. Sometimes you can get around that by fixing your switch port to a specific speed. (or maybe you have a bad cable)
23:57 JoeJulian leblaaanc: What I did was.... kill the glusterfsd process for that brick. Do a "replace-brick ... commit force" moving the old brick to the new location. kill glusterfsd for that brick (again). unmount the brick from the old location, mount it in the new location, then "volume start $vol force" to start the brick again.
23:58 leblaaanc guessing that stops the volume?
23:59 JoeJulian Not if it's a replicated one.
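A sketch of the sequence JoeJulian describes for relocating a brick path on the same server (volume, server and paths are placeholders; this assumes a replicated volume so the data stays available from the other replica):

    gluster volume status <VOLNAME>            # note the PID of the brick to move
    kill <OLD_BRICK_PID>                       # stop that glusterfsd
    gluster volume replace-brick <VOLNAME> server1:/old/brick server1:/new/brick commit force
    kill <NEW_BRICK_PID>                       # stop the brick it just started at the new path
    umount /old/brick
    mount <DEVICE> /new/brick                  # remount the underlying filesystem at the new path
    gluster volume start <VOLNAME> force       # start the brick process again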
