
IRC log for #gluster, 2013-11-20


All times shown according to UTC.

Time Nick Message
00:02 leblaaanc Okay so you are stopping the glusterfsd process on brick1, replacing the brick location ("forcefully", I presume, because it doesn't exist yet), killing the fsd process because it started back up, moving your brick to the new location, then starting the volume (which should still be running, actually, which again I presume is why the "force").
00:02 JoeJulian correct
00:03 leblaaanc could you go the route of removing the brick, setting replica to 1, adding the brick, setting replica to 2, then rebalance?
00:04 leblaaanc (this is just for my own curiosity) :)
00:05 JoeJulian If you only have two bricks, you wouldn't even need a rebalance. I have 12 bricks, replica 3, so I had to do it the way I did (or have down time).
00:06 leblaaanc so a remove-brick replica 1 on a 2-brick volume should just have it limp along on 1 brick
00:07 JoeJulian yes
00:11 leblaaanc how do I adjust the replica count? I didn't quite see this.
00:11 leblaaanc gluster volume set vol replica 2 ?
00:13 JoeJulian gluster volume remove-brick replica 1 $old_brick
00:14 JoeJulian When you add it back in, you'll need to trick gluster. It'll complain about the path being part of a volume. The way to avoid that is to not have it mounted when you add the new brick back.
00:14 JoeJulian gluster volume add-brick replica 2 $new_brick
00:14 leblaaanc hrm.. when I removed the brick it made the volume crap out
00:14 JoeJulian Then kill the glusterfsd for that brick, mount your brick, volume start force.
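For reference, a rough sketch of the replica-swap sequence JoeJulian describes above (volume and brick paths are placeholders, and depending on the gluster release the commands may prompt for confirmation or require "force"):

    # drop to a single copy, removing the old brick
    gluster volume remove-brick vol replica 1 server1:/bricks/old_brick
    # add the new brick while its path is NOT mounted, so gluster does not
    # complain about the path already being part of a volume
    gluster volume add-brick vol replica 2 server1:/bricks/new_brick
    # kill the glusterfsd that started on the empty directory, mount the real
    # brick filesystem at that path, then restart the brick processes
    gluster volume start vol force

Self-heal can then bring the re-added copy back in sync.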
00:15 JoeJulian well that's not supposed to happen...
00:15 leblaaanc going from replicate to distribute should be okay?
00:15 JoeJulian yep
00:18 leblaaanc so theoretically I should be able to just have glusterfs share a single brick
00:18 leblaaanc currently.
00:22 JoeJulian yes
00:23 leblaaanc weird..
00:23 leblaaanc mount failed; the volume is up though
00:23 JoeJulian restart glusterd
00:24 leblaaanc what am I checking ? Mount failed. Please check the log file for more details.
00:24 leblaaanc sudo mount -t glusterfs sld-wowza-1:wowza /srv/wowza
00:25 JoeJulian /var/log/glusterfs/srv-wowza.log
00:26 leblaaanc http://pastie.org/private/6xfooxbmaelhm6sl082bq
00:26 glusterbot Title: Private Paste - Pastie (at pastie.org)
00:27 gdubreui joined #gluster
00:36 leblaaanc bleh
00:36 leblaaanc i hate messing with gluster.. it's so unintuitive
00:41 JoeJulian Sorry, off doing $dayjob
00:43 JoeJulian leblaaanc: Since you're down, you might as well stop and start the volume. It looks like your remaining brick is either not running or is not communicating with glusterd.
00:43 JoeJulian Is this 3.4.0?
00:44 leblaaanc 3.3.1
00:44 JoeJulian 3.3.2 is current, btw...
00:45 JoeJulian 3.4.1 is the ,,(latest)
00:45 glusterbot The latest version is available at http://goo.gl/zO0Fa . There is a .repo file for yum or see @ppa for ubuntu.
00:46 leblaaanc ya I haven't upgraded.
00:49 JoeJulian My assumption is that you've hit some bug. It's not one that I've heard of before though.
01:05 dbruhn lol, OT but can I get a win with hardware today
01:05 JoeJulian :)
01:05 _BryanHm_ joined #gluster
01:06 dbruhn I have a super micro server that I can't seem to make work with a USB dvd drive, and a stack of HP servers that are recognizing the 3TB drives loaded in it as 2.2TB each...
01:07 JoeJulian bios option for the HPs?
01:07 dbruhn firmware update seems to be the path to resolution, but HP really makes that a complicated process trying to find the magic boot dvd that will use their goofy firmware patches
01:07 JoeJulian ah
01:07 JoeJulian One more reason I like my asus servers.
01:08 dbruhn Honestly the dell stuff I have has been fine, this stack of HP stuff is for a single customer. It's about 245TB of raw disk waiting to be set up.
01:08 dbruhn I am not sure what's up with the super micro server, it's one of my dev servers that's been booted off the same drive a billion times
01:09 dbruhn Just one of those days I guess
01:09 JoeJulian mtbf 999999999 cycles...
01:10 JoeJulian I need to run to the store and turn my neighborhood blue along the way... (ingress).
01:11 dbruhn lol
01:11 dbruhn two stroke?
01:11 dbruhn please tell me two stroke
01:11 JoeJulian Lol, no...
01:11 JoeJulian http://ingress.com
01:11 glusterbot Title: Ingress (at ingress.com)
01:11 dbruhn damn, that's my favorite way to make a neighborhood blue
01:12 dbruhn hmm neat, never heard of it
01:12 JoeJulian want an invite?
01:12 dbruhn iphone
01:13 dbruhn otherwise I would say yet
01:13 dbruhn yes
01:13 JoeJulian And you probably would. Google just changed the keyboard layout so there's a lot of typos going around...
01:14 JoeJulian Luckily, android is an open platform and I can just use another keyboard.
01:14 dbruhn haha
01:16 dbruhn I was carrying an Android and an iPhone for a while, but the iPhone seemed to work better and all I really do is text, email, and call on the thing. Once in a while some ssh if it's needed, or a game of words with friends but not a huge phone person.
01:17 dbruhn The android is now mounted to a motorcycle in a waterproof bag, being used as a GPS speedometer on one of my motorcycles.
01:17 JoeJulian Yep. Whichever tool solves your problem is always my philosophy.
01:17 dbruhn It looks pretty there, and out of place.
01:18 daMaestro joined #gluster
01:27 mattapp__ joined #gluster
01:31 bala joined #gluster
01:39 bala1 joined #gluster
01:40 raghug joined #gluster
01:58 harish_ joined #gluster
02:03 Eco_ joined #gluster
02:15 asias_ joined #gluster
02:20 kevein joined #gluster
02:22 Eco_ joined #gluster
02:30 hagarth joined #gluster
02:52 lalatenduM joined #gluster
02:59 kshlm joined #gluster
03:10 chirino joined #gluster
03:14 bennyturns joined #gluster
03:28 sgowda joined #gluster
03:30 johnmark JoeJulian: pingy
03:30 kanagaraj joined #gluster
03:36 edong23 joined #gluster
03:38 Guest19728 joined #gluster
03:38 premera_t joined #gluster
03:43 bharata-rao joined #gluster
03:47 shubhendu joined #gluster
03:48 Criticalhammer joined #gluster
03:49 bulde joined #gluster
03:51 Criticalhammer Hi everyone, I have a CPU question to ask. What impact do CPU speed and core count have on a gluster setup? I understand that more cores and a faster CPU are always better, but when should I start focusing on other things like RAID setup, disk controller, memory, etc.?
03:52 RameshN joined #gluster
03:56 Criticalhammer In terms of CPU I was thinking a single 1.8 GHz quad-core Xeon would work well, not becoming a bottleneck, in a brick with a HW RAID controller.
04:03 dusmant joined #gluster
04:07 itisravi joined #gluster
04:13 shylesh joined #gluster
04:13 asias_ joined #gluster
04:17 Criticalhammer left #gluster
04:25 AndreyGrebenniko joined #gluster
04:25 mohankumar joined #gluster
04:26 mattapp__ joined #gluster
04:43 mattappe_ joined #gluster
04:43 shruti joined #gluster
05:01 _pol joined #gluster
05:04 saurabh joined #gluster
05:05 hateya joined #gluster
05:08 mattapp__ joined #gluster
05:10 ppai joined #gluster
05:14 hagarth joined #gluster
05:16 bala joined #gluster
05:19 raghu joined #gluster
05:21 spandit joined #gluster
05:25 satheesh joined #gluster
05:25 Liquid-- joined #gluster
05:25 vpshastry joined #gluster
05:29 lalatenduM joined #gluster
05:41 nshaikh joined #gluster
05:47 rastar joined #gluster
05:49 shri joined #gluster
05:51 shri hagarth: ping ... Hi
05:51 hagarth shri: pong
05:51 shri hagarth: I tried that server.allow-insecure: on ... and tried to launch a Nova instance
05:52 shri but openstack still uses the mounted GlusterFS!
05:52 shri hagarth: I checked in ps aux | grep qemu and for the device file it uses the GlusterFS mount point
05:52 hagarth shri: interesting, is this the stock qemu that comes with fedora 19?
05:53 shri hagarth: what do you mean .. stock qemu ??
05:54 shri hagarth: my F19 has this qemu-kvm rpm -- qemu-kvm-1.4.2-13.fc19.x86_64
05:54 shubhendu joined #gluster
05:55 shri hagarth: also on my setup I have set CINDER_DRIVER=glusterfs  &   volume_driver = .....GlusterfsDriver
05:55 shri hagarth: did you set it up similarly on packstack when you tried ?
05:55 hagarth shri: this is the qemu that gets packaged in fedora right?
05:56 hagarth shri: yes, those configuration changes are necessary
05:57 shri hagarth: let me check the F19 iso.. because I believe that devstack->stack.sh may have installed a newer qemu..
05:58 bulde joined #gluster
05:59 shri hagarth: yeah F19 has this qemu package - qemu-kvm-1.4.2-3.fc19.x86_64.rpm
05:59 shri just checked ISO
06:00 hagarth shri: ok
06:00 shri hagarth: and my /etc/nova/nova.conf have these variable -- qemu_allowed_storage_drivers = [gluster] & source_ports = ['24007']
06:01 hagarth shri: do you know if nova attempts libgfapi at all and then falls back on fuse? anything in the logs to indicate that?
06:02 shri hagarth: where can I get the libgfapi related log... is there any specific log file for libgfapi ?
06:02 shri hagarth: by default I'm checking in /opt/stack/data/logs  directory
06:03 CheRi joined #gluster
06:05 shri hagarth: and I'm using the nova command below to invoke/launch the instance after creating a bootable cinder volume
06:05 shri nova boot --flavor 2 --image <iso_image_id>  --block_device_mapping vda=<cinder_volume_id>:::0 Instance_name
06:08 hagarth shri: that looks right
06:10 hagarth shri: I will try through devstack in a day or two and let you know if i encounter the same problem
06:12 psharma joined #gluster
06:14 satheesh joined #gluster
06:14 raghu joined #gluster
06:15 shri hagarth: yeah thanks.... if possible I will try with packstack.. OK !
06:15 shri hagarth: just one small question .. how can I tell that Nova has started using libgfapi.. ?
06:15 hagarth shri: sounds good
06:15 shri hagarth: is there any specific log
06:16 hagarth shri: looking at the qemu gluster URI is a good way
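As a quick illustration of what hagarth suggests (the paths and names below are made-up examples, not from shri's system): with the fuse fallback, the disk argument on the qemu command line points under a glusterfs mount, while with libgfapi it shows up as a gluster:// URI.

    # inspect the running guest's disk argument
    ps aux | grep qemu | tr ',' '\n' | grep file=
    # fuse mount in use (what shri is seeing), e.g.:
    #   file=/var/lib/nova/mnt/<hash>/volume-<id>
    # libgfapi in use, e.g.:
    #   file=gluster://<gluster-server>:24007/<volume>/volume-<id>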
06:21 shyam joined #gluster
06:34 shri hagarth: Thanks.. let me check some more logs..
06:37 asias_ joined #gluster
06:52 shyam joined #gluster
06:54 administrator joined #gluster
06:56 Guest75687 left #gluster
06:57 Potjiekos joined #gluster
07:05 Eco_ joined #gluster
07:06 ricky-ti1 joined #gluster
07:12 ababu joined #gluster
07:13 ndarshan joined #gluster
07:17 pkoro joined #gluster
07:21 jtux joined #gluster
07:24 shri joined #gluster
07:54 shubhendu joined #gluster
07:57 ngoswami joined #gluster
08:02 getup- joined #gluster
08:03 ctria joined #gluster
08:04 klaxa|work joined #gluster
08:27 keytab joined #gluster
08:27 haritsu joined #gluster
08:32 aravindavk joined #gluster
08:34 _pol joined #gluster
08:38 mgebbe_ joined #gluster
08:44 glusted joined #gluster
08:47 lalatenduM joined #gluster
08:50 glusterbot New news from newglusterbugs: [Bug 1032438] frame_fill_groups intermittently fails to populate frame->root->groups correctly <http://goo.gl/HMvI0V>
08:50 andreask joined #gluster
08:51 shri_ joined #gluster
08:54 sgowda joined #gluster
08:55 dusmant joined #gluster
08:56 ricky-ticky1 joined #gluster
08:58 ndarshan joined #gluster
08:58 RameshN joined #gluster
09:01 shyam joined #gluster
09:11 calum_ joined #gluster
09:11 shruti joined #gluster
09:15 asias joined #gluster
09:18 vshankar joined #gluster
09:22 ndarshan joined #gluster
09:24 RameshN joined #gluster
09:25 dusmant joined #gluster
09:27 _pol joined #gluster
09:35 Rio_S2 joined #gluster
09:40 geewiz joined #gluster
09:45 vpshastry1 joined #gluster
09:49 meghanam joined #gluster
09:49 meghanam_ joined #gluster
09:56 bala joined #gluster
09:56 shyam joined #gluster
09:57 hagarth joined #gluster
09:58 shri hagarth: ping... u there
10:11 andreask joined #gluster
10:13 ndarshan joined #gluster
10:14 diegol__ joined #gluster
10:22 _pol joined #gluster
10:24 shubhendu joined #gluster
10:33 raghug joined #gluster
10:40 shyam joined #gluster
10:42 ndarshan joined #gluster
10:43 harish_ joined #gluster
10:51 gdubreui joined #gluster
10:53 Guest67977 joined #gluster
10:59 bala joined #gluster
11:06 hagarth joined #gluster
11:07 hagarth shri: pong, around now
11:12 shyam joined #gluster
11:16 _pol joined #gluster
11:20 vpshastry1 joined #gluster
11:30 lpabon joined #gluster
11:50 ira joined #gluster
11:52 rcheleguini joined #gluster
11:55 ipvelez joined #gluster
11:55 DoctorWedgeworth joined #gluster
11:57 DoctorWedgeworth I've got a gluster export which is distributed, but at one point I think it must have been mirrored because some files are on both servers. I want to change it to be mirrored again without downtime (if possible), does gluster have a way of doing this or am I going to have to start again?
11:58 hagarth1 joined #gluster
12:02 shyam joined #gluster
12:02 andreask joined #gluster
12:10 _pol joined #gluster
12:14 andreask joined #gluster
12:17 rastar joined #gluster
12:23 ppai joined #gluster
12:23 meghanam joined #gluster
12:23 meghanam_ joined #gluster
12:27 raghug joined #gluster
12:33 pk joined #gluster
12:44 morse joined #gluster
12:46 geewiz joined #gluster
12:46 vpshastry joined #gluster
12:49 kkeithley1 joined #gluster
12:49 kkeithley1 left #gluster
12:52 vpshastry left #gluster
12:53 shireesh joined #gluster
12:56 B21956 joined #gluster
13:00 shubhendu joined #gluster
13:04 _pol joined #gluster
13:12 keerthi joined #gluster
13:13 keerthi can someone help me with how to set up object storage in glusterfs
13:15 shri joined #gluster
13:17 shri hagarth1: you there..
13:19 dusmant joined #gluster
13:25 shireesh joined #gluster
13:25 hagarth1 shri: yes
13:27 shri hagarth: Hi
13:27 shri hagarth: just one update
13:27 shri hagarth: on my system qemu-kvm + gluster works
13:28 shri hagarth: but it won't work with openstack
13:28 shri when openstack launches a nova instance with libvirt/qemu-kvm
13:28 shri hagarth: so it looks like it may be an issue with devstack only!
13:29 shri hagarth: also I checked libvirt/qemu/instance00001.log for the nova instance to verify if there are any errors .. but could not find any errors/failures
13:30 keerthi Can someone help me with how to set up GlusterFS as Swift object storage
13:32 shri hagarth: Really Need How-To now :)
13:32 hagarth shri: absolutely, let me play around with devstack and get back
13:33 cyberbootje joined #gluster
13:34 shri hagarth: sure.. I'll also try to debug.. and will try packstack as well if it supports F19
13:35 shri hagarth: Thanks for Help !
13:36 vpshastry1 joined #gluster
13:38 chirino joined #gluster
13:42 hagarth shri: thanks!
13:42 hagarth keerthi: this might be of help - https://github.com/gluster/gluster-swift/blob/master/doc/markdown/quick_start_guide.md
13:42 glusterbot <http://goo.gl/P43Nqx> (at github.com)
13:43 vpshastry1 left #gluster
13:51 kanagaraj joined #gluster
13:53 bennyturns joined #gluster
13:57 GabrieleV joined #gluster
13:58 _pol joined #gluster
14:12 shubhendu joined #gluster
14:20 mattapp__ joined #gluster
14:20 JonathanD joined #gluster
14:29 mattapp__ joined #gluster
14:35 keerthi hagarth: thank you so much
14:38 hagarth keerthi: good luck with that, let us know if you need more help.
14:39 keerthi sure thank you
14:42 social I seem to be unable to rebuild 3.5qa1 in our buildsystem, it always fails on i686, any idea? http://paste.fedoraproject.org/55413/58496138
14:42 glusterbot Title: #55413 Fedora Project Pastebin (at paste.fedoraproject.org)
14:45 failshell joined #gluster
14:51 vshankar joined #gluster
14:53 _pol joined #gluster
14:55 satheesh joined #gluster
14:56 bugs_ joined #gluster
14:58 satheesh2 joined #gluster
14:59 stickybitt joined #gluster
14:59 kaptk2 joined #gluster
14:59 kkeithley joined #gluster
15:03 wushudoin joined #gluster
15:05 stickybitt Hmm, on Gluster 3.4.0 during a self heal I'm seeing a lot of this in my `gluster volume heal <VOLUME> info`: <gfid:602d99cc-65b0-47eb-bffd-9ed982177c2e>
15:05 stickybitt When the heal first started I was seeing some file names, but not anymore.
15:26 zaitcev joined #gluster
15:29 Liquid-- joined #gluster
15:33 mistich has anyone seen gluster stop a process when writing too much? I moved the filesystem on the gluster nodes from ext3 to xfs, and now the process that is writing to the gluster nodes dies and I cannot kill it until I kill glusterfs
15:35 raghug joined #gluster
15:36 zerick joined #gluster
15:41 dusmant joined #gluster
15:46 neofob joined #gluster
15:50 dbruhn joined #gluster
16:01 bulde joined #gluster
16:04 Liquid-- joined #gluster
16:08 kshlm joined #gluster
16:09 satheesh1 joined #gluster
16:11 bgpepi joined #gluster
16:12 giannello joined #gluster
16:14 giannello hi all
16:14 giannello is it normal for gluster to be restarted when deleting a volume?
16:15 giannello I have 2 volumes, created a 3rd one for testing, when I deleted it...all the connections to volume 1 and 2 dropped
16:15 giannello (NFS connections using the built-in translator)
16:16 giannello I can see in the glustershd.log that the process is restarting
16:17 hagarth joined #gluster
16:23 satheesh3 joined #gluster
16:25 kkeithley glustershd.log is for the Self Heal Daemon
16:25 _pol joined #gluster
16:26 giannello same thing for nfs.log
16:28 giannello and it happens also on volume create
16:29 giannello so the question is: is that normal? will that happen also when using gluster fuse client?
16:31 criticalhammer joined #gluster
16:33 criticalhammer Hi everyone. I have a gluster question to ask. What is everyone's overall experience with CPU usage while maintaining a gluster cluster? I've read up on CPU usage and noticed that some admins get high CPU usage for sometimes unknown reasons. How does CPU load scale with more nodes? Also, are there diminishing returns with an increasing number of CPU cores?
16:48 XpineX_ joined #gluster
16:53 Guest19728 joined #gluster
17:05 aliguori joined #gluster
17:11 neofob criticalhammer: i get high cpu usage in healing process
17:11 tqrst I've only experienced high cpu usage during healing and rebalancing
17:11 tqrst and memory leaks during rebalancing
17:35 criticalhammer neofob and tqrst: We were leaning towards a purely distributed setup, seeing as the data stored is not mission critical and overall disk space matters more.
17:35 criticalhammer tqrst: memory leaks, really? What version are you using?
17:36 mistich has anyone seen gluster stop a process when writing too much? I moved the filesystem on the gluster nodes from ext3 to xfs, and now the process that is writing to the gluster nodes dies and I cannot kill the process until I kill glusterfs
17:39 criticalhammer neofob: If you don't mind me asking, what are your node specs? How many cores do you have per node?
17:40 neofob i have only 4 cores/node and i have two nodes for replica 2
17:41 criticalhammer How often do you re-balance? Have you noticed performance degradation while rebalancing?
17:42 neofob criticalhammer: my experience with my home server on different configs is that you want at least 1 core/brick
17:42 keet joined #gluster
17:43 kkeithley you only need to rebalance after you add or remove a brick (or a replica set)
17:43 criticalhammer Okay, thanks neofob
17:43 johnmark woah... most recent numbers are in. Our gfapi-QEMU integration is pretty hot
17:44 criticalhammer gfapi
17:44 johnmark libgfapi - the client library released with 3.4
17:44 mistich anyone know of instructions on moving a filesystem from ext3 to xfs without losing data
17:44 keet Can someone help me? after I set a new IP address gluster is not starting
17:45 samppah johnmark: yes it is indeed! can't wait until there's support for snapshots in libvirt and all the other stuff :P
17:47 keet my volume is not accessible
17:48 criticalhammer what do the logs say keet?
17:48 keet transport endpoint is not connected glusterfs
17:48 keet [2013-11-20 17:14:59.378132] W [glusterfsd.c:1002:cleanup_and_exit] (-->/usr/sbin/glusterd(main+0x5d2) [0x406802] (-->/usr/sbin/glusterd(glusterfs_volumes_init+0xb7) [0x4051b7] (-->/usr/sbin/glusterd(glusterfs_process_volfp+0x103) [0x4050c3]))) 0-: received signum (0), shutting down
17:49 johnmark samppah: word :)
17:54 keet its not starting
17:54 keet Connection failed. Please check if gluster daemon is operational
17:54 mistich anyone know of instructions on moving a filesystem from ext3 to xfs without losing data
17:56 tg2 joined #gluster
17:56 kkeithley mistich: do you mean converting an ext3 fs to an xfs fs in place?
17:56 mistich yes
17:57 mistich gluster volume remove-brick volume brick
17:57 mistich I think this is the command
17:57 mistich remove the brick, then reformat, add the brick back, and keep going until all bricks are reformatted
17:59 keet [2013-11-20 17:30:39.306185] E [glusterd-store.c:2487:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore [2013-11-20 17:30:39.306431] E [xlator.c:390:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again [2013-11-20 17:30:39.306494] E [graph.c:292:glusterfs_graph_init] 0-management: initializing translator failed [2013-11-20 17:30:39.306538] E [graph.c:479:glusterfs_
18:00 Liquid-- joined #gluster
18:00 kkeithley You might be able to save-and-restore with cpio: after you take a backup of the brick, mkfs.xfs, then restore the contents. I haven't tried it myself so I can't recommend it.
18:01 kkeithley but there's no magic tool that will convert an ext3 fs to an xfs fs.
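For what it's worth, the rotate-one-brick-at-a-time route mistich sketches above would look roughly like the following on a plain distribute volume (names and devices are placeholders, it is untested, and it assumes the remaining bricks have room to absorb the drained data):

    # 1. drain the brick so its data migrates to the other bricks first
    gluster volume remove-brick myvol server1:/bricks/b1 start
    gluster volume remove-brick myvol server1:/bricks/b1 status    # wait for "completed"
    gluster volume remove-brick myvol server1:/bricks/b1 commit
    # 2. reformat the underlying device as xfs (512-byte inodes are commonly
    #    recommended for gluster) and remount it
    umount /bricks/b1
    mkfs.xfs -i size=512 /dev/sdX1
    mount /dev/sdX1 /bricks/b1
    # 3. add the freshly formatted brick back, rebalance, then repeat
    gluster volume add-brick myvol server1:/bricks/b1
    gluster volume rebalance myvol start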
18:01 mistich ok going to try on a test box
18:04 keet criticalhammer:
18:08 keet can someone help me? after I changed the IP I am not able to start the service
18:11 keet gluster
18:12 Clay-SMU check your hosts files and/or DNS and make sure you can ping between nodes
18:13 Clay-SMU look in /var/log/glusterfs for more detailed errors
18:19 tqrst criticalhammer: 4.1, but I've had that problem since 3.2
18:20 tqrst er, 3.4.1
18:21 sroy__ joined #gluster
18:24 Clay-SMU So if there are any xenserver experts, I'm trying to add a gluster client to xenserver, so far I've got it mounted but need to build a storage repository (xe sr-create) to make it a target for VM's.   I'm thinking that if I can assign it to a volume group that would work, just no idea how to bridge from the client to a vg
18:26 Jestir88 joined #gluster
18:27 aib_233 joined #gluster
18:32 criticalhammer sorry keet beyond looking at logs and network issues, I can't be any more help. I'm new to gluster myself
18:33 rotbeard joined #gluster
18:33 criticalhammer thanks tqrst for the info
18:33 keet no problem, anyhow I fixed the problem
18:33 criticalhammer how did you fix it?
18:34 keet the problem is that when I created the volume I specified the IP address, so after I rebooted the machine
18:34 keet I got another IP address
18:34 keet so my volume was bound to that IP address
18:34 criticalhammer right. Yeah either use dns or static IP addys
18:35 criticalhammer i actually ran into that exact issue the very first time I glustered
18:37 ndk joined #gluster
18:41 keet criticalhammer:
18:42 keet so I just deleted the /var/lib/glusterd
18:42 keet glusterd.info
18:43 jskinner_ joined #gluster
18:45 criticalhammer you could have modified the ip addresses inside the vol file
18:47 criticalhammer but yeah that works just as well
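A small illustration of the hostname route criticalhammer mentions (the hostnames here are invented; they just need to resolve, via DNS or /etc/hosts, on every node and client): creating the volume by name means a reassigned IP only requires a DNS or hosts-file update, not surgery on the volume definition.

    # from gluster1: probe the peer by name, then create the volume with hostnames
    gluster peer probe gluster2.example.com
    # probe back once from gluster2 so the first node is also known by name, not IP:
    #   gluster peer probe gluster1.example.com
    gluster volume create myvol replica 2 \
        gluster1.example.com:/bricks/b1 gluster2.example.com:/bricks/b1
    gluster volume start myvol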
18:55 Clay-mobi joined #gluster
18:56 ipvelez joined #gluster
19:00 ipvelez hello, I am having problems starting glusterd after an upgrade
19:01 hagarth1 joined #gluster
19:01 ipvelez could you guys please check out the error that appears in the logfile to enlighten me on what may be the problem?   here is the error text: http://pastie.org/private/tdbevlii53c9rzb3ftawa
19:02 glusterbot Title: Private Paste - Pastie (at pastie.org)
19:03 andreask joined #gluster
19:10 RedShift joined #gluster
19:26 hateya_ joined #gluster
19:29 _pol joined #gluster
19:33 Clay-SMU is your replication network different than the client network?
19:45 ipvelez it's just two amazon ec2 servers
19:45 ipvelez this happens on the second server
19:47 ipvelez I tried with 'glusterd --debug' and there is an error that says '<ip of the 1st server> is not local'
19:48 ipvelez I don't understand since I *can* reach that server from the one that is giving errors
19:51 dbruhn joined #gluster
19:52 JoeJulian ipvelez: "E [glusterd-store.c:2487:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore" would suggest that a hostname assigned to one of your bricks is not able to be resolved on that server.
20:00 ipvelez thanks for the answer, what hostname should I use?  I had used the private amazon ec2 ip address
20:06 daMaestro joined #gluster
20:13 cjh973 is there a good place to look besides hekafs for developer docs?  i'm finding the nearly complete lack of comments in the code frustrating
20:17 Clay-SMU @ipvelez not sure, maybe put the ip in the /etc/hosts  if you are sure it's static
20:21 Clay-SMU @cjh973 maybe ask in #gluster-dev
20:26 bugs_ joined #gluster
20:30 dbruhn joined #gluster
20:38 chirino joined #gluster
20:39 diegol__ joined #gluster
20:41 badone joined #gluster
20:43 chirino joined #gluster
20:44 diegol__ joined #gluster
20:45 mistich my app keeps dying at the same place, and I have to kill it and gluster to kill the app. here is the strace from it:  http://ur1.ca/g2hvh
20:45 glusterbot Title: #55532 Fedora Project Pastebin (at ur1.ca)
20:45 mistich any suggestions?
20:49 diegol__ joined #gluster
21:00 diegol__ joined #gluster
21:10 diegol__ joined #gluster
21:15 diegol__ joined #gluster
21:18 dbruhn joined #gluster
21:20 diegol__ joined #gluster
21:26 diegol__ joined #gluster
21:30 raghug joined #gluster
21:30 cjh973 Clay-SMU: ok thanks
21:41 sroy_ joined #gluster
21:43 Clay-mobi joined #gluster
22:03 dbruhn joined #gluster
22:05 geewiz joined #gluster
22:10 Mo__ joined #gluster
22:15 Clay-mobi joined #gluster
22:18 andreask joined #gluster
22:18 andreask joined #gluster
22:26 failshel_ joined #gluster
22:30 DV__ joined #gluster
22:33 ira joined #gluster
22:35 eug_ joined #gluster
22:36 eug_ hi all.  quick question: is there any kind of way to sync gluster with some back-end?  like s3?
22:38 dbruhn You can set up a gluster cluster in ec2 and use geo-replication
22:38 eug_ i'll look that up; thank you
22:38 dbruhn or you could just use an s3 bridge and do it like a normal push up to it
22:38 eug_ an s3 bridge like s3backer?
22:39 dbruhn sure
22:39 dbruhn honestly haven't touched anything like that in about 3/4 years so not really sure what's currently out there
22:40 eug_ hmmm ok
22:40 dbruhn But there are a couple guys running gluster systems on EC2, you could easily use Geo-replication for that
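To make dbruhn's suggestion concrete, geo-replication from a local volume to a gluster volume running in EC2 looks roughly like this (volume and host names are placeholders; the exact syntax varies by release, and newer versions add a setup/"create push-pem" step first):

    # start replicating the local master volume to a slave volume on the EC2 side
    gluster volume geo-replication localvol ec2-host.example.com::remotevol start
    gluster volume geo-replication localvol ec2-host.example.com::remotevol status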
22:40 eug_ this is a good place to start
22:41 eug_ i did see a recent gerrit commit named "transparent encryption" somethingsomething from redhat
22:41 eug_ on a side note
22:41 eug_ is there currently no transparent encryption in gluster?
22:41 eug_ or at least, not up to this commit?
22:41 dbruhn No idea to be honest.
22:41 dbruhn If you are worried about encryption, is VPN an answer?
22:42 eug_ not for regulatory requirements
22:42 dbruhn Are you looking for encryption of the data in rest? or just in transport?
22:42 eug_ both
22:42 dbruhn assuming it's already encrypted on primary storage?
22:42 eug_ are there solutions with dm-crypt on top of gluster?  that may be a partial solution
22:42 dbruhn I haven't seen anything
22:43 eug_ hmmm ok
22:43 dbruhn How much data are you talking about?
22:43 eug_ i suppose hundreds of gigabytes to start with
22:43 dbruhn Oh so really not that much data
22:43 eug_ heh :)
22:49 chirino joined #gluster
22:49 semiosis eug_: don't be fooled!  lots of people wave their hands at the whole "just bridge to s3" thing, but afaik no one's ever really had success with that
22:50 eug_ semiosis: as a worst-case, if there's a way to copy blocks of gluster; i could pipe that to an s3 uploader
22:50 semiosis eug_: gluster uses a disk filesystem as a backend, which usually means a block device (disk/RAID/LVM) formatted with XFS
22:51 semiosis eug_: why do you want to combine gluster with s3?
22:51 semiosis what problem are you trying to solve?
22:52 eug_ running some distributed computing stuff on EC2; the object store needs to support hard links
22:52 eug_ while it's not necessary to be able to replicate to something that will continue working when the cluster dies, it would be a big plus
22:53 eug_ we have regulatory requirements so the data needs to be encrypted at rest and in transport
22:56 semiosis eug_: can you use EBS for your gluster bricks?  that would be the easy way, since you can snapshot them to S3
22:56 semiosis and restore them easily
22:56 eug_ hmm
22:56 eug_ that sounds like a definite possibility
22:56 eug_ currently was thinking of using ephemeral storage
22:56 semiosis i just did a very informal test of the new c3.large instance, which has ssd ephemeral
22:57 semiosis got about 10x the bandwidth on the local ssd as I did from an EBS vol
22:57 eug_ the EBS was non-IOPS right?
22:57 semiosis thats true
22:57 semiosis plain old ebs
22:57 eug_ hmmm
22:57 semiosis maybe the provisioned IOPS can do better, i've never played with that
22:57 eug_ it's pretty expensive
22:58 semiosis yes, too much for me
22:58 eug_ same here
22:59 semiosis generally speaking, i would recommend against mixing gluster & s3
22:59 semiosis it's so uncommon you'll have to figure out lots of stuff to get it working right
23:00 semiosis and performing well
23:00 eug_ yes; it looks like you're right
23:00 eug_ gluster on its own may be good enough for us; assuming we can get some kind of encryption on it
23:01 dbruhn Could you build your own?
23:01 eug_ ideally something transparent
23:02 eug_ i suppose we could run dm-crypt on ephemeral storage, mount gluster on that, and then somehow tunnel gluster connections via ssh
23:02 dbruhn Technically if you encrypt the data before you transport it, it's already encrypted in transit, you don't need to double up with SSH
23:02 eug_ depends on the layer of encryption
23:02 semiosis eug_: something i experimented with long ago was running gluster on ephemeral storage, then taking an image of the filesystem & saving that to S3
23:03 eug_ dbruhn: if the encryption is underneath gluster, then we still need to encrypt in transit
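A bare-bones sketch of the layering eug_ is describing (device name, brick path, and key handling are all placeholders, and this only covers data at rest; traffic between clients and bricks still needs its own protection, e.g. a tunnel or gluster's SSL transport where available):

    # encrypt the ephemeral device, then build the brick filesystem on top of it
    cryptsetup luksFormat /dev/xvdb
    cryptsetup luksOpen /dev/xvdb brick0_crypt
    mkfs.xfs -i size=512 /dev/mapper/brick0_crypt
    mkdir -p /bricks/b0
    mount /dev/mapper/brick0_crypt /bricks/b0
    # the brick now sits on dm-crypt; gluster itself is unaware of the encryption
    gluster volume create secvol server1:/bricks/b0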
23:03 semiosis eug_: kinda hacky but if you do end up going with ephemeral storage, it might help you.  another option would be copying the ephemeral filesystem to an ebs volume & snapshotting that
23:03 eug_ semiosis: sounds painful
23:03 semiosis indeed
23:04 dbruhn No, I am saying build file-level encryption, and then maybe use something like rsync to transport it to other storage.
23:04 semiosis i think gluster has on-wire encryption since 3.4
23:04 eug_ aaah.  yeah; file level encrypt ... was hoping to avoid that :D
23:05 eug_ the only mention of encryption i see is here: http://review.gluster.org/#/c/4667/
23:05 glusterbot Title: Gerrit Code Review (at review.gluster.org)
23:05 eug_ which looks too new for production
23:09 dbruhn Why are you trying to avoid file level encryption?
23:10 eug_ it would require modifying a lot of code to do so
23:10 eug_ since i don't know a transparent way to implement it
23:11 DV__ joined #gluster
23:12 dbruhn Do you need to worry about retention of data for legal purposes? Encryption is only part of that game.
23:13 semiosis i had not seen s3backer before.  this is pretty interesting!
23:15 eug_ yes; but in this case, gluster would be for processing and a final step would upload to s3 for retention
23:15 eug_ semiosis: yes, s3backer is interesting but does not allow distributed writes
23:15 eug_ nor, possibly, reads
23:15 eug_ since it's eventual consistency
23:16 semiosis what region are you in?
23:16 semiosis i think all but us-standard have read after write consistency
23:17 eug_ hmm; us-standard but could point to us-west instead
23:18 eug_ still; each node has to be able to write so that's a no-go
23:18 eug_ especially since i'm assuming the inode table is all sitting on one block
23:19 semiosis nope
23:20 semiosis well
23:20 semiosis if you were really going to go that route, you would use several s3backers, each with their own fs image
23:20 semiosis and those would be your glusterfs bricks
23:21 semiosis but this is starting to look like a game of "how complicated can we make it?"
23:22 semiosis no-go for many reasons :)
23:22 eug_ haha
23:22 eug_ yes
23:22 eug_ i mean; i could just upload the glusterfs bricks straight to s3
23:22 micu joined #gluster
23:22 eug_ if i could somehow do so in a transactional way
23:30 eug_ ok, thanks semiosis & dbruhn.  recording all i've learned for later
23:30 semiosis channel logs are in the /topic too :)
23:30 eug_ cool.
23:30 eug_ take care
23:30 dbruhn Hope you figure out something to meet your needs
23:30 eug_ getting closer :)
23:31 davidbierce joined #gluster
23:49 dbruhn semiosis, odd question.. do you name the directories containing your brick data uniformly, or uniquely?
23:49 dbruhn ie. /var/brick01 or /var/brick
23:49 dbruhn I have done it both ways, and found annoyances each way
