
IRC log for #gluster, 2014-03-27


All times shown according to UTC.

Time Nick Message
00:03 elico joined #gluster
00:04 kam270_ joined #gluster
00:07 mattappe_ joined #gluster
00:09 elico1 joined #gluster
00:09 elico1 left #gluster
00:13 kam270_ joined #gluster
00:14 glusterbot New news from newglusterbugs: [Bug 1081274] clang compilation fixes and other directory restructuring <https://bugzilla.redhat.com/show_bug.cgi?id=1081274>
00:20 mattappe_ joined #gluster
00:22 kam270_ joined #gluster
00:24 elico joined #gluster
00:26 coredump joined #gluster
00:35 elico joined #gluster
00:40 elico joined #gluster
00:43 pk joined #gluster
00:43 theron joined #gluster
00:45 pk left #gluster
00:45 [o__o] left #gluster
00:48 [o__o] joined #gluster
00:50 fsimonce joined #gluster
00:51 [o__o] left #gluster
00:54 elico joined #gluster
00:54 bala joined #gluster
00:55 [o__o] joined #gluster
00:58 [o__o] left #gluster
00:58 elico joined #gluster
01:01 [o__o] joined #gluster
01:09 elico joined #gluster
01:09 yinyin joined #gluster
01:12 dmakovec joined #gluster
01:12 crazifyngers_ joined #gluster
01:13 elico joined #gluster
01:16 dmakovec Hi guys, hope I'm not being rude bursting in and asking a question, but I'm a newbie with a healing question.  running a 1x2 cluster.  volume heal info is constantly giving me a report of a file that needs to be healed.  I attempt to manually heal by running gluster volume heal gv0, and the command reports success, no error log output, etc.  But when I run volume heal info again the same file still appears, as shown: https://gist.githu
01:17 dmakovec shd appears to be running but the file has been sitting there in the same state for the past 2 hours.  File appears to be identical on both bricks
01:17 dmakovec any idea?
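(A quick way to see why an entry keeps showing up in heal info is to compare the AFR changelog xattrs for that file on both bricks. A minimal sketch, assuming the volume is gv0 and /export/brick1 is a hypothetical brick path:

    # run on both replica servers
    gluster volume heal gv0 info
    getfattr -d -m . -e hex /export/brick1/path/to/file
    # all-zero trusted.afr.gv0-client-* values on both bricks suggest the
    # entry is stale; a non-zero value shows which brick still records
    # pending operations against the other.
)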
01:19 refrainblue joined #gluster
01:19 elico joined #gluster
01:19 nightwalk joined #gluster
01:19 jmarley joined #gluster
01:19 jmarley joined #gluster
01:21 elico joined #gluster
01:26 cjanbanan joined #gluster
01:26 elico joined #gluster
01:29 sputnik13 joined #gluster
01:30 sputnik1_ joined #gluster
01:33 elico joined #gluster
01:34 xavih joined #gluster
01:36 elico joined #gluster
01:38 tokik joined #gluster
01:49 lijiejun joined #gluster
01:49 primechu_ joined #gluster
01:49 haomaiwang joined #gluster
01:49 JonnyNomad joined #gluster
01:49 lyang0 joined #gluster
01:49 NuxRo joined #gluster
01:51 [o__o] joined #gluster
02:00 chirino joined #gluster
02:02 raghug joined #gluster
02:07 elico joined #gluster
02:08 eshy joined #gluster
02:08 harish_ joined #gluster
02:12 elico joined #gluster
02:16 elico joined #gluster
02:25 davinder joined #gluster
02:28 seapasulli joined #gluster
02:32 elico joined #gluster
02:34 elico joined #gluster
02:38 elico joined #gluster
02:42 elico joined #gluster
02:46 cjanbanan joined #gluster
02:47 elico joined #gluster
02:53 elico joined #gluster
03:00 theron joined #gluster
03:01 bharata-rao joined #gluster
03:04 elico joined #gluster
03:06 elico joined #gluster
03:09 elico joined #gluster
03:13 bstr joined #gluster
03:14 nightwalk joined #gluster
03:16 elico joined #gluster
03:18 elico joined #gluster
03:20 davinder joined #gluster
03:23 elico joined #gluster
03:24 shubhendu joined #gluster
03:26 elico joined #gluster
03:28 elico joined #gluster
03:30 elico joined #gluster
03:33 elico joined #gluster
03:36 hagarth joined #gluster
03:39 elico joined #gluster
03:42 elico joined #gluster
03:43 ravindran1 joined #gluster
03:44 cjanbanan joined #gluster
03:45 ravindran1 joined #gluster
03:49 itisravi joined #gluster
03:53 raghug joined #gluster
03:54 kanagaraj joined #gluster
03:55 elico joined #gluster
03:55 chirino_m joined #gluster
03:57 elico joined #gluster
04:01 saurabh joined #gluster
04:02 ndarshan joined #gluster
04:15 haomaiwa_ joined #gluster
04:15 glusterbot New news from newglusterbugs: [Bug 1081337] geo-replication create doesn't take into account ssh identity file <https://bugzilla.redhat.com/show_bug.cgi?id=1081337>
04:16 ravindran1 left #gluster
04:17 spandit joined #gluster
04:18 raghug joined #gluster
04:19 elico joined #gluster
04:19 haomai___ joined #gluster
04:24 elico joined #gluster
04:26 elico joined #gluster
04:27 elico joined #gluster
04:31 elico joined #gluster
04:32 elico joined #gluster
04:34 elico joined #gluster
04:35 kdhananjay joined #gluster
04:37 elico joined #gluster
04:38 cjanbanan joined #gluster
04:38 elico joined #gluster
04:40 elico joined #gluster
04:42 elico joined #gluster
04:47 sahina joined #gluster
04:50 bala joined #gluster
04:52 elico joined #gluster
04:54 dusmant joined #gluster
04:54 prasanth_ joined #gluster
04:56 elico joined #gluster
05:00 elico joined #gluster
05:01 mohankumar joined #gluster
05:01 elico joined #gluster
05:04 elico joined #gluster
05:05 elico joined #gluster
05:06 ricky-ticky joined #gluster
05:06 nishanth joined #gluster
05:07 cjanbanan joined #gluster
05:10 raghug joined #gluster
05:14 shylesh joined #gluster
05:18 deepakcs joined #gluster
05:19 vpshastry joined #gluster
05:24 seapasulli joined #gluster
05:24 ndarshan joined #gluster
05:32 RameshN joined #gluster
05:32 cjanbanan joined #gluster
05:32 kanagaraj_ joined #gluster
05:35 elico joined #gluster
05:39 elico joined #gluster
05:40 bala joined #gluster
05:40 lalatenduM joined #gluster
05:41 elico joined #gluster
05:41 seapasulli joined #gluster
05:42 elico joined #gluster
05:44 aravindavk joined #gluster
05:46 elico joined #gluster
05:47 ndarshan joined #gluster
05:49 shubhendu joined #gluster
05:50 elico joined #gluster
05:52 elico joined #gluster
05:54 benjamin_____ joined #gluster
05:54 RameshN joined #gluster
05:54 DV joined #gluster
05:54 ppai joined #gluster
05:58 elico joined #gluster
06:03 elico joined #gluster
06:04 raghu joined #gluster
06:04 sputnik1_ joined #gluster
06:05 elico joined #gluster
06:05 ndarshan joined #gluster
06:07 elico joined #gluster
06:07 kanagaraj_ joined #gluster
06:08 bala joined #gluster
06:10 kanagaraj joined #gluster
06:13 elico joined #gluster
06:15 elico joined #gluster
06:18 psharma joined #gluster
06:18 ravindran1 joined #gluster
06:20 elico joined #gluster
06:22 elico joined #gluster
06:24 elico joined #gluster
06:26 elico joined #gluster
06:27 nightwalk joined #gluster
06:28 elico joined #gluster
06:30 elico joined #gluster
06:38 DV joined #gluster
06:39 davinder joined #gluster
06:40 elico joined #gluster
06:43 tshefi joined #gluster
06:44 rahulcs joined #gluster
06:45 elico joined #gluster
06:46 glusterbot New news from newglusterbugs: [Bug 1075087] [Rebalance]:on restarting glusterd, the completed rebalance is starting again on that node <https://bugzilla.redhat.com/show_bug.cgi?id=1075087>
06:48 elico joined #gluster
06:50 elico joined #gluster
06:52 kdhananjay joined #gluster
06:52 kanagaraj joined #gluster
06:53 sahina joined #gluster
06:54 nshaikh joined #gluster
06:56 elico joined #gluster
06:57 kdhananjay joined #gluster
07:00 elico joined #gluster
07:01 elico joined #gluster
07:02 davinder joined #gluster
07:06 meghanam joined #gluster
07:06 meghanam_ joined #gluster
07:07 cjanbanan joined #gluster
07:08 vimal joined #gluster
07:10 dusmant joined #gluster
07:10 elico joined #gluster
07:11 slayer192 joined #gluster
07:12 kdhananjay joined #gluster
07:15 elico joined #gluster
07:16 kdhananjay joined #gluster
07:18 elico joined #gluster
07:25 elico joined #gluster
07:28 deepakcs @mailinglists
07:28 glusterbot deepakcs: http://www.gluster.org/interact/mailinglists
07:28 deepakcs @sambavfs
07:28 glusterbot deepakcs: http://lalatendumohanty.wordpress.com/2014/02/11/using-glusterfs-with-samba-and-samba-vfs-plugin-for-glusterfs-on-fedora-20/
07:28 deepakcs @faq
07:29 deepakcs @sambavfs | deepakcs
07:29 cjanbanan joined #gluster
07:32 XATRIX joined #gluster
07:32 kanagaraj joined #gluster
07:33 sahina joined #gluster
07:34 deepakcs @deepakcs
07:35 ekuric joined #gluster
07:36 elico joined #gluster
07:40 elico joined #gluster
07:46 elico joined #gluster
07:49 elico joined #gluster
07:54 ctria joined #gluster
07:55 Pavid7 joined #gluster
07:57 elico joined #gluster
07:59 elico joined #gluster
08:00 dusmant joined #gluster
08:01 elico joined #gluster
08:04 elico joined #gluster
08:06 elico joined #gluster
08:06 sputnik1_ joined #gluster
08:07 eseyman joined #gluster
08:08 pkliczew joined #gluster
08:10 keytab joined #gluster
08:10 elico joined #gluster
08:12 ngoswami joined #gluster
08:13 davinder joined #gluster
08:18 cjanbanan joined #gluster
08:19 elico joined #gluster
08:29 Soccerray joined #gluster
08:29 rgustafs joined #gluster
08:30 lalatenduM glusterbot, help
08:30 glusterbot lalatenduM: (help [<plugin>] [<command>]) -- This command gives a useful description of what <command> does. <plugin> is only necessary if the command is in more than one plugin. You may also want to use the 'list' command to list all available plugins and commands.
08:30 lalatenduM glusterbot, list
08:30 glusterbot lalatenduM: Admin, Alias, Anonymous, Bugzilla, Channel, ChannelStats, Conditional, Config, Dict, Factoids, Google, Herald, Later, MessageParser, Misc, Network, NickCapture, Note, Owner, Plugin, RSS, Reply, Seen, Services, String, Topic, URL, User, Utilities, and Web
08:38 elico joined #gluster
08:40 Pavid7 joined #gluster
08:40 elico joined #gluster
08:40 samppah @facts
08:40 samppah glusterbot, facts
08:40 samppah =(
08:44 shylesh joined #gluster
08:48 elico joined #gluster
08:49 andreask joined #gluster
08:52 elico joined #gluster
08:56 elico joined #gluster
08:58 ndarshan joined #gluster
08:58 kanagaraj joined #gluster
08:59 warci joined #gluster
08:59 dusmant joined #gluster
09:04 elico joined #gluster
09:04 X3NQ joined #gluster
09:05 elico joined #gluster
09:07 tryggvil joined #gluster
09:09 elico joined #gluster
09:11 elico joined #gluster
09:12 elico joined #gluster
09:14 elico joined #gluster
09:18 hagarth @factoid
09:18 hagarth @factoids
09:18 hagarth samppah: I give up too :(
09:18 samppah :(
09:19 samppah @meh
09:19 glusterbot samppah: I'm not happy about it either
09:24 elico joined #gluster
09:29 hybrid512 joined #gluster
09:29 RameshN joined #gluster
09:30 hybrid512 joined #gluster
09:35 jtux joined #gluster
09:37 keytab joined #gluster
09:40 elico joined #gluster
09:44 elico joined #gluster
09:45 hagarth joined #gluster
09:46 dusmant joined #gluster
09:47 RameshN joined #gluster
09:50 davinder joined #gluster
09:50 elico joined #gluster
09:52 kanagaraj joined #gluster
09:52 elico joined #gluster
09:55 elico joined #gluster
09:57 Pavid7 joined #gluster
09:59 elico joined #gluster
10:01 elico joined #gluster
10:01 chirino joined #gluster
10:03 elico joined #gluster
10:05 elico joined #gluster
10:07 nightwalk joined #gluster
10:15 jbustos joined #gluster
10:15 elico joined #gluster
10:17 elico joined #gluster
10:19 rgustafs joined #gluster
10:27 elico joined #gluster
10:27 raghug joined #gluster
10:29 elico joined #gluster
10:30 ravindran2 joined #gluster
10:31 elico joined #gluster
10:34 kanagaraj joined #gluster
10:36 elico joined #gluster
10:39 theron joined #gluster
10:39 dusmant joined #gluster
10:41 siel joined #gluster
10:42 elico joined #gluster
10:47 elico joined #gluster
10:49 elico joined #gluster
10:49 ravindran1 joined #gluster
10:49 mohankumar joined #gluster
10:50 elico joined #gluster
10:52 elico joined #gluster
10:55 elico joined #gluster
11:00 elico joined #gluster
11:03 elico joined #gluster
11:03 elico left #gluster
11:06 elico joined #gluster
11:08 andreask joined #gluster
11:09 diegows joined #gluster
11:09 elico joined #gluster
11:11 elico joined #gluster
11:14 elico joined #gluster
11:16 elico joined #gluster
11:18 elico joined #gluster
11:20 elico joined #gluster
11:22 elico joined #gluster
11:23 elico joined #gluster
11:24 kkeithley1 joined #gluster
11:25 elico joined #gluster
11:27 elico joined #gluster
11:29 elico joined #gluster
11:32 elico joined #gluster
11:34 elico joined #gluster
11:34 liquidat joined #gluster
11:39 bala joined #gluster
11:39 ngoswami joined #gluster
11:47 dusmant joined #gluster
11:50 elico joined #gluster
11:52 sahina joined #gluster
11:52 elico joined #gluster
11:52 ndarshan joined #gluster
11:52 kanagaraj joined #gluster
11:54 RameshN joined #gluster
11:54 elico joined #gluster
11:56 elico joined #gluster
11:59 ppai joined #gluster
12:01 elico joined #gluster
12:04 elico joined #gluster
12:07 itisravi joined #gluster
12:09 elico joined #gluster
12:12 elico joined #gluster
12:17 elico joined #gluster
12:19 elico joined #gluster
12:20 elico joined #gluster
12:21 ppai joined #gluster
12:23 kam270_ joined #gluster
12:24 raghug joined #gluster
12:24 kanagaraj joined #gluster
12:24 ndarshan joined #gluster
12:26 B21956 joined #gluster
12:29 elico joined #gluster
12:30 elico joined #gluster
12:31 mattappe_ joined #gluster
12:33 kam270_ left #gluster
12:34 elico joined #gluster
12:36 elico joined #gluster
12:38 elico joined #gluster
12:40 elico joined #gluster
12:42 elico joined #gluster
12:44 aravindavk joined #gluster
12:45 elico joined #gluster
12:47 elico joined #gluster
12:49 elico joined #gluster
12:50 17WAAB8DC joined #gluster
12:53 elico joined #gluster
12:53 dusmant joined #gluster
12:54 sprachgenerator joined #gluster
12:54 jtux joined #gluster
12:55 theron joined #gluster
12:57 hagarth joined #gluster
12:59 coredump joined #gluster
13:04 JPeezy joined #gluster
13:05 elico joined #gluster
13:06 kkeithley1 joined #gluster
13:07 RameshN joined #gluster
13:12 coredump joined #gluster
13:14 japuzzo joined #gluster
13:15 ppai joined #gluster
13:16 JPeezy joined #gluster
13:18 benjamin_____ joined #gluster
13:29 robo joined #gluster
13:29 nightwalk joined #gluster
13:29 ngoswami joined #gluster
13:30 jtux joined #gluster
13:36 robo joined #gluster
13:46 hagarth joined #gluster
13:49 seapasulli joined #gluster
13:51 jmarley joined #gluster
13:57 kaptk2 joined #gluster
14:01 theron joined #gluster
14:03 X3NQ joined #gluster
14:04 diegows joined #gluster
14:05 chirino_m joined #gluster
14:08 ctria joined #gluster
14:10 seapasulli joined #gluster
14:11 rpowell joined #gluster
14:17 coredump_ joined #gluster
14:17 dbruhn joined #gluster
14:19 mohankumar joined #gluster
14:25 ninthBit joined #gluster
14:27 harish_ joined #gluster
14:30 jag3773 joined #gluster
14:32 ninthBit hi, i would like to set up a cron job that will execute the gluster command to output status information to log files.  The thing is my gluster command only works when running as sudo but does not output anything, not even an error message, when running in root's crontab.  The command is "gluster --log-file=/dev/null peer status"  that output is being written to a log file.  When I run the command at the command line sudo gluster....  i can get out
14:33 ninthBit continued... if i run the gluster command without sudo i get an error "connection failed.  please check if gluster daemon is operational" which is why i have to run it with sudo
14:34 ninthBit my cron job is running but the gluster command isn't generating any output .. my script is reading from standard in and works when i run it from the console but again not under cron
14:35 ninthBit which leads me to ask if there is a trick to get the gluster command to work when running as a cron job.
14:36 dbruhn ninthBit, have you tried running it in a script outside of cron?
14:36 ninthBit dbruhn, yes. it works correctly when i run it from the command line.
14:36 dbruhn can you just launch your script from cron?
14:37 lmickh joined #gluster
14:37 dbruhn what distro?
14:38 ninthBit dbruhn, yes, i am running my script from cron.  the script is basically "gluster --log-file=/dev/null peer status | while read .... echo into file"
14:38 ninthBit dbruhn, the distro is Ubuntu 12.04 server.  currently running gluster version 3.4.2
14:39 dbruhn you've enabled the root user?
14:40 ninthBit dbruhn, i have a "touch" that is creating a file initially and the cron is setup with other scripts that are working. would that still be a possible problem?
14:41 dbruhn under the root users crontab?
14:41 ninthBit dbruhn, yes, i use sudo crontab -e to edit the crontab
14:42 ninthBit dbruhn, i didn't see any environment variables that would be different from my user and root for running in cron but i should double check
14:42 dbruhn ninthBit, each user has their own crontab so log in as root and then edit crontab with -e and see if it works
14:43 rpowell left #gluster
14:43 jobewan joined #gluster
14:45 ndk joined #gluster
14:45 ninthBit dbruhn, yes, from my understanding "sudo crontab -e" will edit the root user's crontab instead of my user's crontab.  if in my account i execute "crontab -l" i do not see the job.  thank you for taking time to help me too (B)
14:46 dbruhn ninthBit, sorry I haven't been of more help.
14:47 dbruhn Whats the ultimate goal you are trying to accomplish? monitoring?
14:48 ninthBit keep tabs on the status of gluster operation. i have not had the time to find other tools to install and monitor gluster and figured it would be easy to output a few of the gluster info and status commands to files on the network to assist in an initial monitoring... low low tech
14:48 dbruhn Do you have an existing monitoring solution?
14:49 ninthBit dbruhn, for linux and glusterfs no... we are new to the game and not up on monitoring solutions honestly
14:50 vpshastry joined #gluster
14:50 ninthBit dbruhn, getting there... i have on the todo list to use chef or other devops based tools....
14:50 dbruhn Totally understand, we've all been at the cusp of learning and having to glue it all together.
14:50 dbruhn Tried and true monitoring tools for stuff being up and notifications, nagios
14:50 dbruhn cacti for usage statistics
14:51 dbruhn a lot of guys like zabbix
14:51 ninthBit i know there is splunk but it is rather clunky at times
14:51 ninthBit thanks, i'll check those out
14:52 jobewan joined #gluster
14:52 Pavid7 joined #gluster
14:52 ninthBit let me see if i can create a simple example of the script i am wanting to run under cron and see if it can be replicated
14:56 sprachgenerator joined #gluster
14:57 Copez joined #gluster
14:57 Copez Hello all
14:57 dbruhn Hello
14:57 glusterbot dbruhn: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
14:57 Copez Still having performance issues with a 3-node Gluster setup
14:58 Copez Dell R515 / 8x 3TB RAID10 / 32GB RAM / ZFS + compression + tuning (SLOG SSD Cache, xattr=sa)
14:58 dbruhn Copez, what's some of the detail of the issue, and what have you tried to do to resolve it?
14:59 Copez Network = 2x 10 GBe per node with B-ALB
15:00 Copez With a DD of 80 GB / 1MB blocksize I can achieve 1300 MB/s directly on the server
15:01 Copez with a Bonnie++, double RAM and 80 GB file, I can achieve 290 / 360 MB/s directly on the server
15:02 Copez If i create a 3x replicated or 2x replicated or 3x striped volume the performance will not come above 130 MB/s
15:03 Copez Gluster volume tweaking doesn't resolve a thing, maybe a few MB/s gain but nothing serious
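(For comparing raw brick speed against the gluster mount, a dd run like the one described above could look like this; a sketch, where the test file paths and the /mnt/gfs-01 mount point are assumptions, and oflag=direct / conv=fdatasync keep the page cache from inflating the numbers:

    # raw brick write, bypassing the page cache: ~80 GB in 1MB blocks
    dd if=/dev/zero of=/aggr0/ddtest bs=1M count=81920 oflag=direct
    # same size through the gluster fuse mount, flushing before dd reports
    dd if=/dev/zero of=/mnt/gfs-01/ddtest bs=1M count=81920 conv=fdatasync
)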
15:03 dbruhn Ok, so your gluster client is running remotely from your gluster servers?
15:03 Copez Yes, I've used Proxmox and Ovirt
15:03 hagarth joined #gluster
15:03 dbruhn and you are getting 360MB a sec across the network without gluster in the picture?
15:03 Copez basically a separate virtualization cluster and a separate storage cluster
15:04 dbruhn understood
15:04 Copez correct
15:04 dbruhn assuming this is still in test?
15:04 Copez Yes
15:04 Copez Bonnie does more random IO as you may know
15:04 Copez :)
15:05 Copez the Gluster profiler shows me that most of the time it has to set XATTRS and WRITES
15:05 dbruhn Are you using NFS or the gluster fuse client?
15:05 Copez Gluster Fuse client
15:05 Copez Also the Virt. nodes are connected with 10 GBe to the Storage / Gluster Nodes
15:05 dbruhn ok, one thing to note is every replication you add, becomes a multiple of how many transfers your network needs to handle.
15:06 Copez I'm aware of that, but when I create a striped volume the speed should increase, but it doesn't
15:06 dbruhn Gluster stripping is slow
15:06 Copez :s
15:06 Copez darn
15:06 Copez :)
15:06 dbruhn lol sec
15:06 Copez Other options?
15:07 chirino joined #gluster
15:07 dbruhn This is a good read on the subject
15:07 dbruhn http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/
15:07 glusterbot Title: Should I use Stripe on GlusterFS? (at joejulian.name)
15:07 Copez Well I would like to use a 3x repliate...
15:07 Copez The stripe was just for testing
15:07 dbruhn I am in a performance bound environment as well, what I have done is engineered my brick storage to produce the amount of IO I need and used DHT+rep
15:08 dbruhn what is your disk subsystem?
15:08 dbruhn I am really having a hard time wrapping my head around 360MB a sec on 10GB
15:08 Copez Nothing, just the default H200 controller, no config...
15:09 dbruhn how many disks?
15:09 Copez Then a RAID10 mirror based on ZFS
15:09 Copez 8 disks
15:09 dbruhn brb
15:09 Copez k
15:10 dusmant joined #gluster
15:12 ctria joined #gluster
15:16 ninthBit this is the script that i have put together to test.  http://pastebin.com/7mK5s92i  when i run it in root's cron i get zero output.  an empty file due to the touch.  If i run the script as my user but with "sudo ./glusterpeer.sh"  i get the expected output.
15:16 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
15:19 theron joined #gluster
15:20 benjamin_____ joined #gluster
15:20 kkeithley1 joined #gluster
15:20 nixpanic joined #gluster
15:21 nixpanic joined #gluster
15:22 vpshastry joined #gluster
15:25 dbruhn back
15:26 dbruhn Copez, what's the output of your volume info?
15:29 Copez http://ur1.ca/gxf4i
15:29 glusterbot Title: #89211 Fedora Project Pastebin (at ur1.ca)
15:30 Copez GFS-01 = 3x Replicate
15:30 Copez GFS-02 = 2x Replicate
15:30 Copez GFS-03 = Striped over 3 Nodes
15:33 dbruhn 8 drives per brick server?
15:33 Copez Yes
15:35 Copez 8x 3TB ST3000NM0033
15:35 dbruhn 7200RPM sata?
15:35 Copez Yes
15:39 dbruhn Assuming you are only using bonds for redundancy sake?
15:40 Copez Correct
15:40 Copez Well B-ALB should also distribute the load a little bit...
15:41 Copez but the main reason is HA / redundancy
15:41 dbruhn yeah, but it looks like you are maxing out at about 2.8gb a sec
15:41 dbruhn which I am assuming is a disk related deal
15:41 Copez Correct ;)
15:41 Copez With iperf i can blow the whole 10GBE to the roof :0
15:41 Copez With 8 threads it goes like a charm...
15:42 Copez network isn't the bottleneck I think
15:42 dbruhn lol, I have some SSD servers that can destroy 10GB ;)
15:42 dbruhn I agree
15:42 dbruhn every time you add a replica does the performance decrease in an expected fashion?
15:42 Copez Yes I'm aware of that but most of the time seems to be in setting the XATTR's and WRITES
15:42 Copez A little
15:43 Copez I'm getting 130 MB/s with a 3 replica
15:43 Copez I'm getting maybe 160 with a 2 replica
15:43 dbruhn small IOPS are always hard, latency is the killer there.
15:43 Copez Just copying a file of 10 GB (HDD Image) from one server to another
15:43 dbruhn and on a write, the client has to wait for a response from all the servers it's writing too
15:44 dbruhn have you tried to run just a DHT (distributed) volume just as a test
15:44 dbruhn to see what no replication looks like for you?
15:44 Copez The strange thing is that I tested gluster on old hardware with a single 1 GB nic
15:44 Copez also with one HDD for brick
15:45 Copez so 3 old nodes, connected through 1 GB with 8 GB RAM, 3 replica's provided me the full 100 MB/s
15:45 dbruhn You know what... try this, take your bond out of the picture, go to a single NIC, and see if that gets better
15:45 Copez So I expected that the performance would go crazy if i spent 25k on new servers
15:46 dbruhn Maybe you are having an issue with multipathing being noisy on the network
15:46 Copez how to create a DHT?
15:46 dbruhn replica 1
15:46 jag3773 joined #gluster
15:46 Copez with 2 bricks?
15:46 dbruhn DHT stands for distributed hash table
15:46 dbruhn you can use 3 bricks if you want
15:47 dbruhn one thing to keep in mind is that the files only going to be read from the brick it's located on
15:47 robo joined #gluster
15:48 dbruhn but with DHT you will see single server to server speeds without gluster, it will narrow down what you are trying to trouble shoot
15:48 Copez "replica count should be greater then 1"
15:48 dbruhn do that and then take the bond out of the picture and see if your performance improves or decreases with a single nic
15:49 Copez okay, disconnected the bond...
15:49 dbruhn I would say disable the bond completely and go straight network, just in case
15:49 sputnik1_ joined #gluster
15:49 Copez I meant "removed" the bond
15:50 Copez I'm unable to create a replica 1
15:50 dbruhn gluster volume create NEW-VOLNAME
15:51 dbruhn that will create a normal distributed volume
15:51 Copez gluster volume create GFS-04 replica 1 transport tcp gluster-backend-01:/aggr0/GFS-04 gluster-backend-02:/aggr0/GFS-04 gluster-backend-03:/aggr0/GFS-04
15:51 seapasulli joined #gluster
15:51 Copez replica count should be greater than 1
15:51 dbruhn then just take replica out of the command
15:52 Copez okay, now it is a distribute :)
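(For reference, the distribute-only test volume comes from simply dropping the replica keyword; using the brick paths from the failed command above:

    gluster volume create GFS-04 transport tcp \
        gluster-backend-01:/aggr0/GFS-04 \
        gluster-backend-02:/aggr0/GFS-04 \
        gluster-backend-03:/aggr0/GFS-04
    gluster volume start GFS-04
)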
15:52 dbruhn ok, now run your test, and then run your test with your bond set back up
15:55 Copez Speed has improved with a 100 MB/s
15:55 Copez 70 MB/s to be exact
15:56 dbruhn from no bond to bond, or just from what you were doing?
15:56 Copez This is with the bond activated
15:56 Copez I'm running the test again without the bond
15:57 Copez No serious improvement or degradation without the bond
15:57 Pavid7 joined #gluster
15:58 dbruhn well that's taken out of the picture at least
15:58 Copez :)
15:58 zerick joined #gluster
15:58 Copez what if I shutdown 2 nodes and then run the test again
15:59 dbruhn you could create a volume with just a single brick and test if you want
16:00 robo joined #gluster
16:00 ninthBit ok, i had suspected my problem was due to env differences.  i copied the path env variable of my user into my script and that fixed the cron job. i am getting output now...
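(The fix described here, cron's minimal environment lacking the expected PATH, can be captured in the script itself. A sketch; glusterpeer.sh and the output location are assumptions:

    #!/bin/bash
    # cron runs with a minimal environment, so set PATH explicitly
    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    OUT=/var/log/gluster-status/peer-status.txt   # hypothetical output file
    mkdir -p "$(dirname "$OUT")"
    # --log-file=/dev/null keeps the CLI from writing its own log, as in the original script
    gluster --log-file=/dev/null peer status > "$OUT" 2>&1
    # root's crontab entry could then be: */5 * * * * /usr/local/bin/glusterpeer.sh
)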
16:02 Copez with one brick I hit 140 MB/s more than with a 3 brick setup
16:03 Copez sidetrack, what do you think of CEPH ?
16:03 dbruhn Copez, one thing to keep in mind: a distributed file system isn't purpose built for single thread performance, it's built to be able to provide performance to multiple clients in an even manner
16:03 dbruhn Honestly I haven't used CEPH, I've heard good things about it. In my space it's not really what I needed.
16:04 Copez Well I wanted a substitute for our (Compellent) SAN and thought Gluster would be a nice substitute
16:04 Copez We use the SAN only for the VM's
16:05 Copez Did I make a wrong decision to assume that Gluster would be a nice substitute?
16:06 dbruhn No a lot of people use it
16:06 dbruhn for exactly what you are talking about
16:06 dbruhn how many vm hosts are you using in front of it?
16:06 Copez Well right now I'm testing with 5 Win2008 VM's and 5 CentOS VM's
16:07 dbruhn and are you trying to better your performance over your SAN or do same for same?
16:07 dbruhn Copez, how many VM hosts, not guest?
16:07 Copez Production is 40x Linux and 20x Windows
16:07 Copez I'm testing with 2 nodes
16:07 Copez Production will be 4 nodes
16:08 cjanbanan left #gluster
16:08 dbruhn What doesn't your Compellent do for you that makes you want to replace it?
16:08 haomaiwang joined #gluster
16:09 Copez It's slow / lots of broken disks and costs around 8k Euro a year for maintenance
16:09 Copez The 3 VM nodes are running out of memory so they will be replaced with a 4 node setup
16:09 dbruhn Fair
16:10 dbruhn does gluster perform better than the SAN?
16:10 Copez The compellent has only 2 active storage path of 1 GB
16:10 Copez At this moment not really :)
16:10 dbruhn How many disks does the Compellent have?
16:11 ekuric1 joined #gluster
16:11 Copez 2 controllers
16:12 Copez 12x 600 GB SAS
16:12 Copez 8x 1TB SATA
16:13 Copez Sorry = 16x 600 GB SAS
16:13 Copez 10x 1 TB SATA
16:14 Copez Sorry again = 16x 300 GB SAS 15k
16:14 Mo_ joined #gluster
16:14 Copez :=)
16:14 dbruhn Ok, well I will say this, Gluster is not magic. Your disk sub systems will need to be purpose built for the performance you want from them.
16:15 dbruhn In my "faster" system I am running 24 bricks that are 6 drive 15k SAS raid 5's
16:15 ekuric joined #gluster
16:15 Copez Why so many bricks?
16:16 dbruhn lots of data, and more manageable chunks of data.
16:16 Copez Based on 6 drives of 600 GB?
16:16 dbruhn yep
16:16 Copez With Hardware R5 ?
16:17 dbruhn I have other systems that are not as performance oriented that have larger bricks.
16:17 dbruhn yep
16:17 dbruhn 12 servers
16:17 dbruhn 2 bricks per server
16:17 Copez MB/s ?
16:18 dbruhn I've never tested for throughput, it's not my concern with any of my systems
16:18 dbruhn I just raw IOPS
16:18 dbruhn s/just/need/
16:18 glusterbot What dbruhn meant to say was: I need raw IOPS
16:19 Copez clear
16:19 Copez how many iops then?
16:19 Copez :)
16:19 gmcwhistler joined #gluster
16:19 dbruhn 700 for every 1TB of data I house on the system.
16:20 monotek "gluster volume heal storage info" gives me some entries with 1 and just points to "/" instead of an file or file id... how to fix?
16:20 dbruhn monotek, are they showing up in the split-brain output?
16:20 Copez Well, this is a nice time to shutdown, go home and think about it.
16:21 Copez Thanks for helping, so far
16:21 dbruhn Hope I gave you some direction that was useful.
16:21 dbruhn I always say, build from the bottom with Gluster. Disk Subsystems, Servers, Network. Everything needs to support from the bottom what you need on the top.
16:21 monotek dbruhn yes
16:22 monotek but how to heal an empty volume?
16:22 dbruhn monotek, check the root of all of the brick file systems and make sure the directories are the same inside each brick, just at the root level
16:31 tryggvil joined #gluster
16:31 Pavid7 joined #gluster
16:33 raghug joined #gluster
16:36 nightwalk joined #gluster
16:37 monotek dbruhn rights of brick dirs seem ok. i also tried to delete all data from the fuse mount and readd it but the problem with the root directory remains...
16:38 dbruhn monotek, in the bricks there is a .glusterfs directory
16:38 monotek yes
16:38 abyss^ joined #gluster
16:38 monotek do i have to compare these?
16:38 dbruhn check .glusterfs/00/00
16:38 dbruhn sec looking at something
16:39 dbruhn check and make sure the file 00000000-0000-0000-0000-000000000001 is a soft link to ../../..
16:39 dbruhn on each brick
16:40 monotek yes 00000000-0000-0000-0000-000000000001 -> ../../../ its there....
16:40 monotek on all 4 nodes
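(When heal info keeps listing "/", comparing the volume root across the bricks — the directory contents plus the afr xattrs on the brick root itself — is the usual next step. A sketch, with /bricks/storage as a hypothetical brick path:

    # run on each brick server of the replica set
    ls -la /bricks/storage
    getfattr -d -m . -e hex /bricks/storage
    # differing directory entries, or non-zero trusted.afr.<vol>-client-*
    # values on the brick root, point at the node that still has pending
    # entries recorded for "/"
)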
16:45 robo joined #gluster
16:46 dbruhn hmm
16:48 glusterbot New news from newglusterbugs: [Bug 1081605] glusterd should not create zombies if xfsprogs or e2fsprogs are missing <https://bugzilla.redhat.com/show_bug.cgi?id=1081605>
17:06 ninthBit i have not figured it out yet... but the command gluster volume heal v info heal-failed is not output under cron .... but my other status and info commands are
17:11 ninthBit never mind i figured it out
17:15 raghug joined #gluster
17:24 hagarth joined #gluster
17:25 bene joined #gluster
17:25 zaitcev joined #gluster
17:26 vpshastry1 joined #gluster
17:27 JPeezy_ joined #gluster
17:28 Matthaeus joined #gluster
17:29 lpabon joined #gluster
17:34 bennyturns joined #gluster
17:35 theron joined #gluster
17:37 coredum__ joined #gluster
17:38 glustercjb1 joined #gluster
17:41 glustercjb1 hey there joe
17:41 glustercjb1 so, I went to open the ticket we were speaking of yesterday, and found this one that was already opened
17:41 glustercjb1 https://bugzilla.redhat.com/show_bug.cgi?id=1077452
17:41 glusterbot Bug 1077452: unspecified, high, ---, vshankar, NEW , Unable to setup/use non-root Geo-replication
17:42 glustercjb1 I also opened the ticket for the lack of "-i" in the gverify.sh script
17:42 glustercjb1 https://bugzilla.redhat.com/show_bug.cgi?id=1081337
17:42 glusterbot Bug 1081337: high, unspecified, ---, csaba, NEW , geo-replication create doesn't take into account ssh identity file
17:46 vulcan joined #gluster
17:47 monotek mounting new created volume via nfs fails. old ones are working. fuse mount works to...
17:47 monotek mount -t nfs storage3.local:/volume /mnt/
17:47 monotek mount.nfs: mounting storage3.local:/volume failed, reason given by server: No such file or directory
17:47 monotek any hints?
17:48 vulcan Hello everyone, having a bit of a problem with a working gluster box. I have a running nfs mount that is on one server and working fine, however I am trying to mount that mount on another server and it then hangs then fails. Is the gluster nfs limited to only one server mounting it at a time? Is there anything I can set in gluster itself?
17:49 monotek nfs mount works on multiple servers at the same time...
17:50 robo joined #gluster
17:50 vulcan yeah, that is what I thought, then I am a bit stumped. I can no longer mount the volume, and it is mounted and running fine on one server
17:51 vulcan monotek, did you try -o vers=3 ?
17:51 monotek no, dont need it on my other volumes....
17:51 monotek which work
17:56 vulcan When trying to mount it even via the gluster client I am also getting failed to fetch volume file.
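(A few generic checks when a gluster NFS mount fails with "No such file or directory" while the fuse mount works; a sketch, reusing storage3.local and the volume name from the example above:

    # is the volume actually exported by gluster's built-in NFS server?
    showmount -e storage3.local
    gluster volume status volume nfs        # the NFS Server entry should show Online: Y
    rpcinfo -p storage3.local               # nfs and mountd should be registered
    # gluster's NFS server only speaks NFSv3, so force the version on the client
    mount -t nfs -o vers=3,tcp storage3.local:/volume /mnt
)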
17:57 jblack joined #gluster
17:57 jblack Hi there!
17:58 jblack I'm looking for a nice chef cookbook for gluster. I've found a few floating around on github in various states. Is there one in particular the community has gatehred around?
17:58 glustercjb1 haha, +1
18:00 jag3773 joined #gluster
18:03 jblack https://github.com/amccloud/glusterfs-cookbook looks not too bad, but I'd have to fix it up to work with my platform.
18:03 glusterbot Title: amccloud/glusterfs-cookbook · GitHub (at github.com)
18:04 glustercjb1 that is super simple, compared to: https://github.com/purpleidea/puppet-gluster
18:04 glusterbot Title: purpleidea/puppet-gluster · GitHub (at github.com)
18:04 glustercjb1 we are not running puppet here though (boo), so I have to do something with Chef (booboo)
18:05 purpleidea glustercjb1: jblack o hi
18:05 chirino joined #gluster
18:06 purpleidea jblack: basically afaict, the defacto configuration management tool for glusterfs is this one in ,,(puppet) but there is nothing stopping you from using a different config management tool, in fact, reading that module might even help you write one
18:06 glusterbot jblack: https://github.com/purpleidea/puppet-gluster
18:06 jblack You're a puppet man, I take it?  Our shop is entirely chef, which I'm pretty good with. I haven't touched puppet, but we have a newer hire that -loves- puppet.
18:07 purpleidea jblack: they all suck for different reasons. for my use cases, at the moment, puppet sucks the least :)
18:07 * jblack nods
18:08 glustercjb1 IMHO puppet lets you shoot yourself in the foot in fewer ways than chef
18:08 purpleidea jblack: you don't need to know very much puppet to use my code -- also you might want to use it just to deploy/re-deploy a dozen different configs for testing...
18:08 purpleidea glustercjb1: exactly, which matters at scale
18:08 glustercjb1 developers love chef because you can shoot yourself in the foot many many ways
18:08 jblack I'm reading through it your branch now
18:09 purpleidea jblack: there is documentation, and also a number of associated articles at: (plug) https://ttboj.wordpress.com/
18:09 glustercjb1 hah, I just realized that you are the puppet guy
18:09 purpleidea jblack: https://github.com/purpleidea/puppet-gluster/blob/master/DOCUMENTATION.md
18:09 glusterbot Title: puppet-gluster/DOCUMENTATION.md at master · purpleidea/puppet-gluster · GitHub (at github.com)
18:09 glustercjb1 purpleidea: ^---
18:09 purpleidea glustercjb1: "the puppet guy" ?
18:09 rotbeard joined #gluster
18:09 glustercjb1 who wrote that module
18:09 purpleidea indeed
18:09 jblack Oh oh. NOw you've been title, branded and marked
18:10 glustercjb1 er, manifest
18:10 glustercjb1 haha
18:10 purpleidea module is correct
18:10 jblack s/title/titled;s/NOw/Now
18:10 purpleidea glustercjb1, jblack: well, feel free to try it out, and let me know if you have issues. it integrates with ,,(vagrant)
18:10 glusterbot jblack: (#1) Part 1 @ https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/, or (#2) Part 2 @ https://ttboj.wordpress.com/2013/12/21/vagrant-vsftp-and-other-tricks/, or (#3) Part 3 @ https://ttboj.wordpress.com/2014/01/02/vagrant-clustered-ssh-and-screen/, or (#4) Part 4 @
18:10 glusterbot https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/, or (#5) https://ttboj.wordpress.com/2014/01/16/testing-glusterfs-during-glusterfest/
18:10 glusterbot (#1) Part 1 @ https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/, or (#2) Part 2 @ https://ttboj.wordpress.com/2013/12/21/vagrant-vsftp-and-other-tricks/, or (#3) Part 3 @ https://ttboj.wordpress.com/2014/01/02/vagrant-clustered-ssh-and-screen/, or (#4) Part 4 @ https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/,
18:10 glusterbot or (#5) https://ttboj.wordpress.com/2014/01/16/testing-glusterfs-during-glusterfest/
18:11 semiosis whoa
18:11 purpleidea semiosis: yeah, we should kick glusterbot for pasting in channel.
18:11 semiosis 4rls
18:12 jblack I have a simple initial use case.  We have a centralized server here at the mothership, and a localized server at each of our facilities. Each localized server provides a few dozen short lived clients with access to firmware and other goodies for self-provisioning.
18:12 nixpanic joined #gluster
18:13 nixpanic joined #gluster
18:13 jblack Well, that's the plan. Right now, we just have the mothership and the clients, and all the configuration happens at the mothership. =)
18:14 jblack so, we could go with something as simple as a nightly rsync from each of the local servers to the central server, or we could use s3, or we could... lots of choices.  I was thinking this is a good time to build up some gluster muscles
18:16 jblack So, something along the lines of a  [non redundant gluster server] ->  [initially two gluster clients] -> [ short lived server farms that mount .. something.. nfs? gluster, not sure]
18:16 jblack Any thoughts?
18:16 chirino joined #gluster
18:19 purpleidea thinking...
18:19 purpleidea jblack: it's not really clear what you want to do. i'd recommend writing it all up neatly on the gluster-users mailing list. but not before you've actually TRIED glusterfs once.
18:21 vpshastry1 joined #gluster
18:22 jblack I want to distribute about 10 gigabytes of firmware data from a central office to two or more local offices, so that temporary clients can access the firmware data for things like firmware flashing and whatnot.
18:23 jblack historically, this sort of problem would be solved with   [central-office] <---periodic rsync --- [a local office] <---- nfs----   [clients]
18:23 vpshastry1 left #gluster
18:24 mattappe_ joined #gluster
18:25 jrcresawn joined #gluster
18:26 jblack Gluster looks like a nice potential replacement for periodic rsync (if I can get openvpn to assign static ips to the new local office servers).  I'm wondering if it makes sense to replace nfs with gluster (I heard that gluster can act as a nfs server?)
18:26 glustercjb1 gluster has nfs built in "for free"
18:26 glustercjb1 I should say, the nfsd
18:27 glustercjb1 IP failover doesn't happen though, that's one piece that the gluster FUSE mount would give you automagically
18:27 jblack great. so to the clients, it's just nfs. That makes it easy.
18:27 jblack We're not deeply concerned about HA or redundancy in this instance.
18:28 purpleidea jblack: do you need 1 way sync or 2 way?
18:29 jblack one way
18:29 glustercjb1 geo-replication!
18:29 jblack ohhh. boss'es boss is buying free thai. Gtg
18:29 purpleidea for 10G you might want to look at btrfs plus sending diffs... it's a built in feature... pretty cool... not necessarily production bombproof yet though
18:29 jblack we're guaranteeing 2 nines. =)
18:30 jblack ohhh
18:30 jblack ok. bb in about an hour.
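(For the one-way sync case, geo-replication is the built-in answer. A rough sketch of the 3.5-era setup — the exact syntax varies by version, and mastervol, slavehost and slavevol are placeholders for the real names; passwordless ssh from a master node to the slave host is assumed to already be in place:

    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status
)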
18:41 JoseBravo joined #gluster
18:41 JoseBravo Hi
18:41 glusterbot JoseBravo: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
18:42 JoseBravo Is hardware raid recommended in glusterfs even if I'll replicate data in another server?
18:43 purpleidea JoseBravo: yes
18:43 purpleidea RAID 6
18:43 * purpleidea afk
18:46 JoseBravo How important are RAM and CPU in glusterfs? Can servers with 8GB of ram and a quad core 2.5Ghz CPU work well?
18:48 dbruhn JoseBravo, ram and processing are important depending on what functions the system is running. Most importantly during maintenance operations.
18:48 seapasulli joined #gluster
18:51 monotek mounting new created volume via nfs fails. old ones are working. fuse mount works to...
18:51 monotek mount -t nfs storage3.local:/volume /mnt/
18:51 monotek mount.nfs: mounting storage3.local:/volume failed, reason given by server: No such file or directory
18:51 monotek any hints?
18:51 JoseBravo dbruhn, are those servers with 8GB, quad core 2.5Ghz too small for a production environment?
18:52 dbruhn JoseBravo, the processing is probably enough, Redhat suggests 32GB for production servers, I have a bunch running 16GB that seem to be fine, but have had issues during rebalance operations.
18:52 dbruhn It really depends on a lot of things
18:53 dbruhn How many clients, what kind of performance, what kind of data, etc.
18:53 dbruhn monotek, what does df -h show on the server you are trying to mount it to?
18:54 dbruhn and whats the output of showmount -e (server address)
18:55 monotek df -h is ok. enough space everywhere... the list is missing my new volumes too....
18:55 monotek the showmount list
18:56 dbruhn from the df I was actually looking to see if you had something mounted @ /mnt or maybe in conflict already
18:56 dbruhn if you do the showmount -e 127.0.0.1 on the server that is providing the share what do you see?
18:58 monotek same thing. list is missing new volumes (which i just recreated after my split brain issue)
18:59 monotek also no mount conflicts...
18:59 JoseBravo dbruhn, thanks!
18:59 JoseBravo last question, is possible to do caching in SSD hard drives?
19:00 monotek josebravo imho only with dm-cache or bcache at the moment....
19:01 JoseBravo ok
19:01 dbruhn JoseBravo, +1 to monotek
19:02 mattappe_ joined #gluster
19:02 dbruhn monotek, are any of your volumes showing up via NFS?
19:02 JoseBravo Any recommendation for the switch? does it need to have some special feature?
19:03 dbruhn only if you need special functions, gluster on a software level is agnostic to the network
19:03 dbruhn except if you are choosing between RDMA or TCP/IP
19:03 monotek dbruhn - "gluster volume status" has all "Y"
19:03 dbruhn but thats a rathole you don't want to go down
19:03 monotek my old volumes work via nfs too
19:04 dbruhn are your old volumes on the same hardware?
19:04 monotek yes
19:04 JoseBravo thanks dbruhn and monotek
19:04 dbruhn and none of the volumes are showing up in the export list?
19:04 monotek the new volumes also work with fuse mount
19:04 monotek yes, none :-(
19:05 dbruhn have you tried remounting one of the existing (old) NFS mounts?
19:08 monotek yes. works...
19:08 dbruhn weird
19:08 dbruhn anything different between your volume options?
19:08 monotek restarting glusterfs-server on all nodes didnt helps to....
19:10 monotek not really... some have server.allow-insecure: on... some others not....
19:11 dbruhn have you tried setting that on the new volumes just to see if that helps?
19:13 monotek no. just saw the nfs shares via "showmount -e 127.0.0.1" on one of the 4 nodes...
19:14 monotek but can't mount against this node either...
19:14 dbruhn Do you have iptables blocking something?
19:14 monotek no. alls servers on same switch. no local iptables
19:14 dbruhn anything in the logs?
19:15 dbruhn when you try and mount
19:15 lmickh joined #gluster
19:15 monotek not in the brick logs... have to look for the other logs....
19:16 dbruhn the brick logs would only have entries if you had debugging turned on, or there was a problem at the brick level
19:16 dbruhn you probably want to look at the messages log on the client side
19:16 dbruhn and the /var/log/glusterfs/nfs.log
19:16 glustercjb1 does anyone know what the use case is for peer_add_secret_pub and peer_gsec_create is?
19:16 dbruhn and maybe the other logs in that directory on the server
19:17 glustercjb1 in the 3.5 beta
19:17 rotbeard joined #gluster
19:17 barnim joined #gluster
19:17 glustercjb1 I mean, I understand what they are doing, but are these just utilities to be run as standalone, or will it be tied into the gluster geo-replication setup somehow?
19:21 nage joined #gluster
19:21 nage joined #gluster
19:22 monotek dbruhn: this is from my nfs log: http://paste.ubuntu.com/7164466/
19:22 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
19:22 monotek maybe gluster thinks i still have split brain?
19:22 monotek but the volumes are recreated
19:23 monotek but split brain infos says no...
19:23 dbruhn hmm, it looks like something is in split-brain, but I can't imagine that would keep it from mounting
19:26 tziOm joined #gluster
19:27 monotek where does the self heal info come from? where is it stored?
19:28 dbruhn it pulls it out of the logs
19:31 monotek i will have a look tomorrow... still at work... need to go home now (its 20:30 here)....
19:31 monotek thanks for your help
19:45 jblack sorry about that. So... tired...now
19:45 nightwalk joined #gluster
19:46 dbruhn np hope I helped you at least narrow some stuff down
19:46 JPeezy joined #gluster
19:57 vpshastry1 joined #gluster
20:01 gdubreui joined #gluster
20:02 vpshastry1 left #gluster
20:05 YazzY joined #gluster
20:07 klaas_ joined #gluster
20:16 zerick joined #gluster
20:16 juhaj A piece of advice, please: would you trust glusterfs to run a medium-high-performance medium-reliability but high availability filesystem of the size of 100s of TB?
20:16 juhaj Performance needs are ~1 GB/s sustained, preferably over 10G ethernet, but possibly IB
20:17 semiosis pretty sure gluster has done more than that
20:17 dbruhn +1 semiosis
20:17 juhaj References?
20:18 Pavid7 joined #gluster
20:18 juhaj I do not doubt either the size or the bandwidth, but the other demands are not quantifiable, so they are harder to check
20:18 dbruhn I'm have 95TB on gluster systems today
20:18 semiosis well i've heard pandora uses gluster, nothing i can cite though
20:18 dbruhn and have more systems being converted as we speak
20:18 juhaj dbruhn: Converted from what, if I may ask?
20:19 dbruhn Mostly NetApp
20:19 juhaj (One of gluster's advantages is that it can export an existing filesystem unlike, for example, lustre)
20:19 dbruhn recently ended up with a mix of line NFS servers, and some more net app, a bit of Exanet, and equalogics I am pulling data off too
20:20 dbruhn s/line//
20:20 glusterbot What dbruhn meant to say was: recently ended up with a mix of  NFS servers, and some more net app, a bit of Exanet, and equalogics I am pulling data off too
20:20 Isaacabo joined #gluster
20:21 juhaj So I am looking into managing a medium-speed, medium-term shared storage for a few client machines, where users often work part of their workflow on one and other part on the other, so having a single medium-fast filesystem is quite handy. (All systems have fast scratch)
20:22 dbruhn juhaj, the biggest thing with gluster is understanding what you expect from it, it's flexible enough to build to your needs for the most part.
20:22 dbruhn have you actually installed it and played with it some?
20:23 juhaj And I do not see many options for that besides glusterfs: CXFS will do that, as will GPFS and lustre, pNFS does not exist except on an RFC and then the rest of the solutions I know of: ceph, tahoe, glusterfs, gfs are not popular enough (=I do not know of a single site I can just walk to and see it in action) for me to jump right into it
20:24 Matthaeus joined #gluster
20:25 juhaj dbruhn: I've been running it for years for private purposes and despite hours and hours of debugging on this channel, I still do not know how to make ACLs work on a replicated+distributed gluster volume. Apart from that, I have no complaints
20:26 * m0zes has 600TB raw right now. can sustain +1GB/s reads/writes. this is with only 2 servers.
20:27 dbruhn m0zes, how much data do you have on that?
20:27 m0zes 180TB of homedirectory space + 50-60TB of "other"
20:28 m0zes that is just used space. I have another 190TB waiting for a need.
20:29 dbruhn I have a lot of extra space too, I built to performance needs though, not space needs.
20:29 dbruhn I am sitting on about 1/2PB of raw disk on my gluster systems right now too
20:29 juhaj m0zes: Nice. You share that with nfs or fuse? And can you describe your setup a bit?
20:29 * m0zes wanted a different architecture... more, smaller servers. money was a problem then.
20:29 juhaj And a little addition to my bandwidth requirement: I need that 1 GB/s from any of the clients, alone.
20:30 Isaacabo Hello Guys
20:31 dbruhn Hi Isaacabo
20:31 juhaj m0zes: My main options are gluster and lustre, but lustre is overkill (performance and management wise), so I am trying to gauge the options before starting testing later in the spring
20:31 Isaacabo Hi dbruhn
20:32 dbruhn Need anything?
20:32 m0zes juhaj: 2 servers each connected via FC to an 8-chassis DS3512 system with 96x3TB drives. each 18 5-disk raid5 arrays that I then stripe across. then I add lvm to carve it up into volumes. volumes are a mix of DHT, REP and DHT+REP.
20:32 Isaacabo yes, i'm having this really weird behavior with gluster
20:33 Isaacabo please bear with me for a sec
20:33 juhaj m0zes: Then your bricks are filesystems on those lvm volumes?
20:33 m0zes juhaj: yep.
20:33 m0zes xfs
20:34 juhaj m0zes: Why FC and not SAS?
20:34 Isaacabo when doing rsync on a gluster mount, i'm getting the following error: "file has vanished". But after doing the rsync a couple of times it works
20:34 Isaacabo seems like gluster is missing some files
20:35 juhaj m0zes: And is that 1 GB/s you get aggregate out of your servers or a single client reading a file?
20:35 Isaacabo and actually found this on the gluster mailing list but no answer so far, it's almost the same error
20:35 Isaacabo sorry, on red hat bugzilla
20:35 Isaacabo https://bugzilla.redhat.com/show_bug.cgi?id=1028582
20:35 glusterbot Bug 1028582: unspecified, unspecified, ---, pkarampu, MODIFIED , GlusterFS files missing randomly - the miss triggers a self heal, then missing files appear.
20:38 m0zes juhaj: FC because that was what my boss wanted. our load is a bit spiky, and we do DHT on the homes volume. >1GB/s reads for most files. 700-900MB/s writes to most. If you are reading from two seperate files (on different bricks) then you can get a bit higher.
20:38 Isaacabo im running 2 bricks in distribute mode, no replica
20:38 m0zes it all depends on your i/o patterns, though. some of our users do a *lot* of random i/o within single files, or create zillions of tiny files...
20:40 robos joined #gluster
20:40 Isaacabo on the client log, i can see this on the files that where reported vanished by rsync
20:40 Isaacabo [2014-03-26 23:34:14.201340] E [dht-linkfile.c:213:dht_linkfile_setattr_cbk] 0-shares-dht: setattr of uid/gid
20:41 m0zes we are getting ready to add try SSD caching (via flashcache or enhanceio) soon. which may help, if we can get fast enough SSDs.
20:52 dbruhn m0zes, are you planning on using dm-cache?
20:52 dbruhn Isaacabo, what version of gluster?.
20:53 Isaacabo glusterfs 3.4.2 on centos 6.5
20:54 dbruhn Isaacabo, are the files on the bricks that are missing?
20:54 m0zes dbruhn: I was planning on using flashcache or enhanceio
20:54 juhaj m0zes: Right. Did your boss have a reason, too? (I have lots of FC around but no reason: SAS would do no worse for less price.) IO pattern is restricted mostly by sequential access to large files at the moment, but for the shared filesystem, expectation is "cp /fast/scratch/file /new/shared/filesystem/", so sequential again. But that cp must get 1 GB/s, otherwise we see no real gain over NFS
20:55 Isaacabo no, that is the issue, the files are there
20:56 Isaacabo looks like rsync builds the list of files to send, then when it tries to locate a file, the file "is gone". But the file is on gluster
20:56 dbruhn m0zes, I am actually starting to put together a 40TB usable 92TB raw system based on straight SSD and IB
20:56 dbruhn it's coming up cheaper than 15k SAS
20:56 Isaacabo last time i had to run the rsync 3 times to do a full sync, the 3rd time sent all the files without any issue
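(Until the underlying lookup issue is fixed, the "run it until nothing vanishes" workaround described above can be scripted; a sketch with hypothetical source and destination paths:

    #!/bin/bash
    SRC=/mnt/gluster/shares/     # hypothetical paths; adjust to the real ones
    DST=/backup/shares/
    for attempt in 1 2 3; do
        rsync -a "$SRC" "$DST" && break   # clean run, stop retrying
        [ $? -eq 24 ] || break            # 24 = "source files vanished"; anything else is a real error
        sleep 10                          # give the lookup/self-heal a moment, then retry
    done
)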
20:56 m0zes dbruhn: we didn't even bother with real SAS. just the nearline SAS stuff.
20:57 dbruhn I have one 15K SAS system with 144 drives, and then a couple of SATA systems right now.
20:59 m0zes dbruhn: I've been looking at these as a next purchase: http://www.serversdirect.com/Components/Chassis/id-SY6716/Quanta_1U_Hadoop_S100-L1JSL-10G_Server# 64 of those filled with 4TB drives, and ssd caching.
20:59 glusterbot Title: Quanta 1U Hadoop S100-L1JSL-10G Server (at www.serversdirect.com)
20:59 dbruhn I've a crap ton of small I/O to deal with though
20:59 mattappe_ joined #gluster
20:59 m0zes small I/O is the bane of my existence.
20:59 dbruhn My problem is no matter how good the caching in front of the stuff is, I starve the cache eventually
20:59 dbruhn I am dealing with 99%+ random reads
21:00 m0zes well, it all looks random if you have 150 different boxes running 2500 different jobs ;)
21:00 dbruhn interesting server, I am testing the 2u front loaded 2.5" super micro box right now. 24x front loaded
21:01 dbruhn hahah I wish it was that. I have 12 servers acting like clients, reading the data as fast as they can off disk
21:01 dbruhn on my SAS system anyway
21:02 m0zes I figure something like that can let you spread the load a bit and get a much larger aggregate throughput.
21:02 dbruhn Who do you deal with over at servers direct? Melanie Landayan is my rep
21:03 m0zes we don't. I forget who the actual vendor that sells those to us is. we'd probably have to go out for a bid if we actually wanted to buy that.
21:03 dbruhn Ahh, government?
21:03 m0zes higher ed in KS.
21:04 dbruhn Ahh sweet
21:04 m0zes I like the job, not so much the bureaucracy.
21:05 m0zes luckily my boss (like any good boss) shields me from most of it.
21:05 dbruhn Been there, I used to be at a public company, I spent more time dealing with people blocking me from doing my job than doing my job
21:08 dbruhn You should let me know how that caching turns out for you.
21:18 andreask joined #gluster
21:21 juhaj m0zes, dbruhn: thanks for your input. I will probably poke you again in a while to talk more about the configs.
21:21 dbruhn juhaj, np, I always enjoy talking about configs. Mine are totally different than m0zes
21:23 juhaj Is there any point in splitting this into lighter servers like lustre often is done?
21:24 dbruhn I am running a lot of servers
21:24 dbruhn just depends on your needs
21:49 mattapperson joined #gluster
21:57 ctria joined #gluster
22:00 fidevo joined #gluster
22:11 juhaj Well, not many small files. Average file size should be around a few hundred MB, mostly sequential IO, concurrent access from 4 client machines, each running potentially 500 processes doing IO at any one time [one hopes most of that IO is done through MPI-IO into a single or just a few files]
22:12 juhaj Oh, sorry, one of the clients is smaller, will not run that many, just perhaps 50
22:13 JoseBravo Can I have a 4x4TB RAID 10 and put the OS inside this array to support glusterfs? Or is it recommended to have a separate disk for the OS?
22:30 elico what FS can I use to dynamically extend on linux?
22:30 elico for underlying fs of glusterfs..
22:43 glustercjb joined #gluster
22:48 wushudoin joined #gluster
22:53 juhaj elico: xfs
22:53 juhaj elico: On linux you pretty much want xfs no matter what
22:53 elico juhaj: why is that? and I was looking at write cache..
22:54 juhaj You want a fs on a cache?-o
22:55 elico I have two nodes.
22:55 juhaj Xfs has, when set up properly, the best performance-reliability-manageability balance. It pretty much is the best in each category and if ext4 wins for small files, the margin is tiny
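(On the "dynamically extend" point: an XFS brick on LVM can be grown online, which is the usual pattern under gluster. A sketch with a hypothetical volume group and mount point:

    # grow the logical volume backing the brick, then grow the mounted XFS to match
    lvextend -L +100G /dev/vg_bricks/brick1
    xfs_growfs /bricks/brick1    # xfs_growfs takes the mount point of the mounted filesystem
)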
22:55 elico the read speed is fine using nfs but when using gluster it's a disaster.
22:56 elico I will try to see how the night goes with the nfs mount also with write..
22:56 juhaj I am not an expert on gluster (not yet at least) but I am sure there is something wrong with your config if that is the case. Someone else here will probably be able to help
22:56 elico seems for now to be better than the glusterfs fuse mount.
22:57 elico OK I am waiting for whoever that will try to help me understand what is the issue. (2 nodes 500GB disks 4GB ram i5)
22:57 elico (1gbps cables with 941 maxed)
23:09 kam270_ joined #gluster
23:09 Isaacabo @juhaj we have our gluster on XFS, do you have any pointers for tuning it? we have a mix of small files and big files
23:14 kam270_ joined #gluster
23:20 tdasilva joined #gluster
23:22 kam270 joined #gluster
23:36 nightwalk joined #gluster
23:37 juhaj Isaacabo: You will need to know the access pattern, but the most important part of the tuning is to align XFS with the underlying block devices when the filesystem is created. For example, if you have a RAID5 of 10+parity discs, you will want to stripe across all the discs (at least for big files: for small files I am not so sure, probably not so good then): mkfs.xfs -d <options>
23:38 juhaj We chose to stripe across LUNs instead of discs because a) we do not want to let a single user get all the bandwidth from it and b) we do have quite a bit of small files, too. The first point is not actually completely true: a single user can still take all the bandwidth, even without meaning to, but at least it does not happen as often
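(To make the alignment advice concrete: a hedged example for the RAID5-with-10-data-disks case mentioned above. The 256KB chunk size and the device name are assumptions and should be read from the RAID controller; -i size=512 is the commonly recommended inode size for gluster bricks so xattrs fit in the inode:

    # su = RAID chunk size, sw = number of data disks in the stripe
    mkfs.xfs -i size=512 -d su=256k,sw=10 /dev/sdb
)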
23:39 juhaj But now it's bedtime
23:43 robos joined #gluster
23:45 kam270 joined #gluster
23:45 Isaacabo jajaja, ok, thanks
23:45 Isaacabo the last brick was a raid6 of 13 disk for a total of 8TB
23:47 Isaacabo interesting, going to investigate that further
23:48 diegows joined #gluster
23:51 kam270 joined #gluster
23:54 bms joined #gluster
23:57 vincent_1dk joined #gluster
23:57 bms unadvisedly, i deleted 30TB of files from a brick in a distributed volume in gluster 3.3. df still shows space used.  du shows file data to be in .glusterfs. If I delete .glusterfs, will gluster rebuild it ok?
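(On that last question: the space is still held because every regular file on a brick has a hard link under .glusterfs, so deleting only the visible copies leaves the link-count-1 hard links behind. Rather than removing the whole .glusterfs directory, which also holds the gfid symlinks gluster relies on, a more targeted cleanup is possible; a cautious sketch, to be verified on a test volume first and with /brick as a hypothetical brick path:

    # list regular files under .glusterfs whose visible name on the brick is
    # gone (their hard link count has dropped to 1)
    find /brick/.glusterfs -type f -links 1
    # after reviewing the list, reclaim the space
    find /brick/.glusterfs -type f -links 1 -delete
)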
