
IRC log for #gluster, 2015-04-06


All times shown according to UTC.

Time Nick Message
00:19 Gill joined #gluster
01:04 klaas joined #gluster
01:24 wkf joined #gluster
01:25 jmarley joined #gluster
01:39 Gill joined #gluster
01:53 harish_ joined #gluster
02:01 archers joined #gluster
02:22 purpleidea joined #gluster
02:22 purpleidea joined #gluster
02:34 purpleidea joined #gluster
02:47 nangthang joined #gluster
03:14 nonreplicate joined #gluster
03:17 nonreplicate hoping someone can point me in the right direction. I have replicate type gluster with 1 x 3 bricks. Volume info and status all look ok, but files are not being replicated across the three bricks. CentOS 7, selinux disabled, firewalld configured.
03:37 gem joined #gluster
03:43 ira joined #gluster
03:45 archers joined #gluster
03:48 itisravi joined #gluster
03:50 ppai joined #gluster
04:01 nbalacha joined #gluster
04:04 nstageberg joined #gluster
04:04 kumar joined #gluster
04:07 nstageberg Hello everyone!  I'm somewhat new to gluster and I'm setting up a proof of concept for my company right now.  I have two nodes set up with some data on them, with distribution and replication.  I've just added a third node and I've found some documentation that indicates I need to reorder the nodes using replace-brick to prevent both replicas from ending up on the same node.  However, when I run replace-brick it says the command is deprecated.
04:11 meghanam joined #gluster
04:12 kanagaraj joined #gluster
04:18 meghanam joined #gluster
04:23 kkeithley1 joined #gluster
04:30 siel joined #gluster
04:31 RameshN joined #gluster
04:33 atinmu joined #gluster
04:35 ndarshan joined #gluster
04:36 kdhananjay1 joined #gluster
04:36 poornimag joined #gluster
04:37 ira_ joined #gluster
04:56 soumya joined #gluster
05:04 Bhaskarakiran joined #gluster
05:05 anoopcs joined #gluster
05:07 jiffin joined #gluster
05:09 kotreshhr joined #gluster
05:14 schandra joined #gluster
05:18 jiku joined #gluster
05:22 nbalacha nstageberg: replace-brick has been deprecated
05:23 nbalacha nstageberg: can you please point me to the documentation that says to use replace-brick
05:23 nbalacha nstageberg: also, can you let me know the details of your volume setup?
05:23 nstageberg just an old site I found:
05:23 nstageberg https://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
05:24 karnan joined #gluster
05:24 nstageberg I currently have 14 bricks in my volume, annotated as <server>:<hostname> 1:1 2:1 1:2 2:2 1:3 2:3 1:4 2:4 1:5 2:5 3:1 3:2 3:3 3:4
05:24 hchiramm_ joined #gluster
05:24 nstageberg I just added 3:1 to 3:4, and as I understand it, that means both replicas of the data could be written to host 3, correct?
05:24 nstageberg I was trying to address that by reordering the bricks using replace-brick
05:25 deepakcs joined #gluster
05:25 nstageberg err, I mean to say <server>:<brick> not <server>:<hostname>
05:27 nstageberg F
05:27 nstageberg http://i1103.photobucket.com/albums/g464/greywolf40/volume_zpszgmedrzt.png
05:27 nstageberg that is my current volume configuration
05:30 kamlesh joined #gluster
05:30 vimal joined #gluster
05:31 soumya joined #gluster
05:32 kamlesh joined #gluster
05:32 hagarth joined #gluster
05:32 nbalacha nstageberg: let me take a look and get back
05:33 nstageberg thanks!
05:34 nbalacha nstageberg: have you got data on this volume already?
05:35 kamlesh joined #gluster
05:35 nbalacha nstageberg: you are correct, bricks 11-14 are on the same host so yes, both replicas will be written to the same host
05:36 kshlm joined #gluster
05:36 nstageberg That's what I was afraid of.  What is the proper procedure to add a new node/new bricks to an existing cluster and have replicas be distributed to different hosts?
05:37 kamlesh joined #gluster
05:37 spandit joined #gluster
05:38 rafi joined #gluster
05:38 nbalacha nstageberg: every set of 2 bricks forms a replica set, so you could just add the bricks such that every other one is on a different node
05:38 nbalacha like bricks 1-10 on your volume
05:40 nstageberg right, but if bricks 1-10 already contain data in an online volume, how do I insert the new bricks without recreating the volume?
05:41 nbalacha nstageberg: you would need to add bricks like this: gluster volume add-brick <node1>:<brick1>  <node2>:<brick1> <node1>:<brick2>  <node2>:<brick2>
05:41 nbalacha nstageberg: in the above, <node1>:<brick1> <node2>:<brick1> form a single replica set
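A minimal sketch of the ordering nbalacha describes, assuming a replica-2 volume named myvol and brick paths under /bricks (all names hypothetical); each consecutive pair of bricks on the add-brick line becomes a replica set, so alternating hosts keeps both copies off the same machine:

    gluster volume add-brick myvol \
        node1:/bricks/b6 node2:/bricks/b6 \
        node1:/bricks/b7 node2:/bricks/b7
    # spread existing data onto the new bricks
    gluster volume rebalance myvol start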
05:42 ashiq joined #gluster
05:42 nstageberg yes, but all bricks on nodes 1 and 2 had already been allocated, and now I have a node 3 with all bricks unallocated
05:42 Manikandan joined #gluster
05:42 Manikandan_ joined #gluster
05:42 sripathi joined #gluster
05:42 nbalacha nstageberg: can you add another node?
05:42 purpleidea joined #gluster
05:42 purpleidea joined #gluster
05:42 nstageberg I have to think this is a common use case; you set up a cluster and it runs for a year, then you purchase new servers to add capacity to the existing cluster
05:42 nstageberg yes I just added node 3
05:43 nstageberg oh you are saying add a fourth node
05:43 nbalacha nstageberg: yes, is that something that is possible?
05:43 nstageberg it's not something I had planned for, but it is absolutely possible
05:44 hgowtham joined #gluster
05:44 nstageberg is that the recommended practice for this use case?
05:44 nbalacha nstageberg: well, one option is to remove the newly added bricks and add them again including a node4
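A rough sketch of that option, assuming the volume is named myvol, the four new bricks live under /bricks on node3, and the extra server is node4 (all placeholder names); if data has already been written to the new bricks, run remove-brick with start and wait for its status to finish before the commit so the data is migrated off first:

    gluster peer probe node4
    # take the four node3-only bricks back out of the volume
    gluster volume remove-brick myvol \
        node3:/bricks/b1 node3:/bricks/b2 node3:/bricks/b3 node3:/bricks/b4 start
    gluster volume remove-brick myvol \
        node3:/bricks/b1 node3:/bricks/b2 node3:/bricks/b3 node3:/bricks/b4 commit
    # re-add them interleaved so each replica pair spans node3 and node4
    gluster volume add-brick myvol \
        node3:/bricks/b1 node4:/bricks/b1 node3:/bricks/b2 node4:/bricks/b2
    # repeat likewise for b3 and b4, then rebalance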
05:45 nstageberg sort of different from how I understood gluster to work, so I would essentially have two groups of two servers that just happened to serve data out of the same volume
05:46 nbalacha nstageberg: can you please explain what you were expecting?
05:46 nbalacha nstageberg: so I can understand the use case better
05:49 nstageberg nbalacha: well, now that I understand it correctly, it's hard to explain how I understood it incorrectly :)  Somehow I thought it was like RAID 5, not RAID 1
05:50 nbalacha nstageberg: :).
05:50 nstageberg nbalacha: let me ask you this, in my configuration, is brick 1:1 an exact copy of brick 2:1?
05:51 nbalacha nstageberg: in your config, based on the volume info posted, brick 1 and brick 2 are replicas of each other
05:51 nbalacha nstageberg: then brick3 and brick4, etc. Is that what you were asking?
05:52 nstageberg yes
05:53 nstageberg somehow I thought gluster was doing something like parity bit checking, and not just literally making two copies of the files.  So as the cluster scales out, I will always only get half of the storage I have, similar to raid 1 or 10.
05:54 nbalacha nstageberg: a distribute replicate volume will make copies, so yes, you will only get half the storage
05:55 nbalacha nstageberg: however, I think you might be looking for something like the erasure coding feature that is currently being worked on
05:55 nbalacha nstageberg: I can point you to someone working on that feature if you would like to know more about it
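For reference, a rough sketch of what the erasure-coded (disperse) volume type looks like in the 3.6-era CLI; server names, brick paths, and the volume name are placeholders. With 6 bricks and redundancy 2, any two bricks can fail and usable capacity is 4/6 of raw, instead of the 1/2 you get with replica 2:

    gluster volume create ecvol disperse 6 redundancy 2 \
        server{1..6}:/bricks/ec1
    gluster volume start ecvol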
05:56 anil joined #gluster
05:56 nstageberg my company is going to have pretty significant storage needs, so I wonder if it might be better to configure each node with a raid 5/6 and then use gluster to do distribution without replication.  The main issue there is that there is a single point of failure if a whole node goes down, and that is not acceptable
05:59 bharata-rao joined #gluster
05:59 overclk joined #gluster
06:02 hagarth nstageberg: typically most gluster deployments use raid6 for protection from disk failures and gluster's synchronous replication for protection from node failures
06:06 nstageberg hagarth: One of our production file archive servers currently has a 20-disk raid 6, and we get abysmal IOPS performance, particularly for small files.  For gluster we have tried using 20 standalone disks and the performance has been dramatically better, the main drawback is obviously total capacity.  Trying to figure out some rough numbers on what our costs would be to hit the performance/capacity targets if this solution were to sca
06:07 kamlesh joined #gluster
06:07 hagarth nstageberg: with standalone disks, the rebuild process in case of a disk failure would be more involved.
06:08 hagarth nstageberg: what is the use case/workload that you are considering to use gluster for?
06:12 hchiramm joined #gluster
06:12 nstageberg hagarth: we presently have two datacenters, each datacenter has a high performance SAN and a low performance NAS.  We have about ~20TB of net storage on both.  We store older data that is not accessed on the NAS and newer data that is accessed frequently on the SAN.  We are nearly out of space on both, and our daily volume is growing very quickly.  Our file archive in total consists of hundreds of millions of files, most of which are
06:18 hagarth nstageberg: how many directories contain these files and what is the approximate size of your file?
06:18 nstageberg hagarth: perhaps ten million directories, though we are getting that number down quickly, the average file is about 100k
06:20 hagarth nstageberg: the number of files per directory is not very high then? a few hundreds per directory?
06:20 hagarth are you considering gluster as a replacement for your existing NAS?
06:21 nstageberg hagarth: that's correct.  There are many directories that have maybe a dozen files, then our newer archiving system puts 4096 files in each directory, that has seemed to give us the best performance so far
06:21 hagarth nstageberg: ok
06:23 nstageberg hagarth: We need to replace the NAS units yes, and we hope that the performance will be high enough that we can move much of our higher performance storage to gluster as well.  The NAS needs maybe 100 IOPS, however the 20-disk raid 6 struggles to supply even that consistently.  The SAN currently supplies maybe 2500 IOPS, and if this solution could supply just 1000 IOPS we could move most data off of the SAN
06:23 nshaikh joined #gluster
06:24 hagarth nstageberg: what is the read:write ratio approximately like in your workload?
06:24 nstageberg hagarth: most of our data is never accessed, then some of it is accessed a modest amount, and then .0001% comprises most of the IOPS, that would stay on the SAN
06:25 nstageberg hagarth: reads are done by users and must be served very quickly, writes are done by our software and can be served slowly, which seems like a great fit with gluster's capabilities.  Altogether there maybe ten times more reads than writes.
06:26 ashiq joined #gluster
06:26 hagarth nstageberg: indeed, looks to be the case. what numbers have you got from raid5?
06:27 hagarth have you done any read-ahead tuning with your raid controller?
06:28 aravindavk joined #gluster
06:30 harish joined #gluster
06:30 atalur joined #gluster
06:32 nstageberg hagarth: we have a 20-disk raid 6 which is running ZFS software raid.  Each disk is a 2TB 7200rpm consumer-class SATA disk.  We get maybe 300 IOPS peak, but it can dip down pretty low, very inconsistent.  The server was originally built as an extremely low cost storage solution, and it's served that purpose well, but even just the process of archiving data to it and keeping everything sync'd up is now overwhelming its throughput.  The
06:32 nonreplicate Anyone able to assist with a replication setup that is not replicating? (3 bricks, all are online and look ok. Centos7, no selinux or firewalld)
06:38 nonreplicate nstageberg: ZFS on hardware raid sounds like a terrible idea
06:38 hagarth nonreplicate: are you creating data from a gluster mount?
06:39 nstageberg nonreplicate: we currently use ZFS software raid, there is no hardware raid.  We have not been very happy with our experience with ZFS and have not used it again.  This is our legacy archive that we are hoping to replace with gluster :)
06:41 hagarth nstageberg: xfs is what most gluster deployments use today
06:41 nonreplicate hagarth:  I am trialling gluster at home, so am expecting it to be something stupid I've done... I have tried touch'ing and cp'ing various files in the glustered brick (/opt/gluster/data/brick) and expecting to see the same file on the other two machines a short time later
06:42 nstageberg hagarth: yes we have used XFS for our gluster implementation, and it is what we use in general
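For completeness, the brick preparation most gluster guides suggest for XFS; the device and mount point below are placeholders, and the 512-byte inode size is the commonly recommended setting so gluster's extended attributes fit inside the inode:

    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /bricks/brick1
    echo '/dev/sdb1 /bricks/brick1 xfs defaults 0 0' >> /etc/fstab
    mount /bricks/brick1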
06:42 hagarth nonreplicate: gluster's replication works only if you mount a volume and create data from there. writing directly to bricks is not recommended.
06:45 nonreplicate That was exactly my issue. Working perfectly now.  Thank you so much!
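A minimal illustration of the fix, assuming a volume named myvol served by server1 (names hypothetical): mount the volume with the native FUSE client and create files through the mount, never directly inside the brick directory such as /opt/gluster/data/brick:

    mkdir -p /mnt/myvol
    mount -t glusterfs server1:/myvol /mnt/myvol
    # this file shows up on all three bricks; a file touched inside the
    # brick directory itself never would
    touch /mnt/myvol/hello.txt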
06:47 ndarshan joined #gluster
06:54 vikumar joined #gluster
06:55 atalur joined #gluster
07:00 glusterbot News from newglusterbugs: [Bug 1203185] Detached node list stale snaps <https://bugzilla.redhat.com/show_bug.cgi?id=1203185>
07:09 nbalacha joined #gluster
07:14 karnan joined #gluster
07:30 glusterbot News from newglusterbugs: [Bug 1206539] Tracker bug for GlusterFS documentation Improvement. <https://bugzilla.redhat.com/show_bug.cgi?id=1206539>
07:30 glusterbot News from newglusterbugs: [Bug 1206587] Replace contrib/uuid by a libglusterfs wrapper that uses the uuid implementation from the OS <https://bugzilla.redhat.com/show_bug.cgi?id=1206587>
07:35 DV joined #gluster
07:45 o5k__ joined #gluster
07:45 ndarshan joined #gluster
07:47 o5k joined #gluster
07:50 bharata-rao joined #gluster
08:03 soumya_ joined #gluster
08:11 aravindavk joined #gluster
08:16 lifeofguenter joined #gluster
08:19 poornimag joined #gluster
08:21 hagarth joined #gluster
08:26 bala joined #gluster
08:27 lifeofguenter joined #gluster
08:29 nbalacha joined #gluster
08:54 lifeofguenter joined #gluster
09:00 glusterbot News from newglusterbugs: [Bug 1209113] Disperse volume: Invalid index errors in readdirp requests <https://bugzilla.redhat.com/show_bug.cgi?id=1209113>
09:00 glusterbot News from newglusterbugs: [Bug 1207532] BitRot :- gluster volume help gives insufficient and ambiguous information for bitrot <https://bugzilla.redhat.com/show_bug.cgi?id=1207532>
09:01 nshaikh joined #gluster
09:16 soumya_ joined #gluster
09:26 ashiq joined #gluster
09:27 poornimag joined #gluster
09:29 lifeofguenter joined #gluster
09:30 hgowtham joined #gluster
09:36 kovshenin joined #gluster
09:36 kotreshhr1 joined #gluster
09:50 bharata-rao joined #gluster
09:56 RameshN joined #gluster
10:00 dusmant joined #gluster
10:00 glusterbot News from newglusterbugs: [Bug 1207735] Disperse volume: Huge memory leak of glusterfsd process <https://bugzilla.redhat.com/show_bug.cgi?id=1207735>
10:09 jmarley joined #gluster
10:09 nbalacha joined #gluster
10:12 anrao joined #gluster
10:19 harish_ joined #gluster
10:28 atalur joined #gluster
10:30 glusterbot News from newglusterbugs: [Bug 1209117] [Snapshot]  Unable to edit, list , delete scheduled jobs when scheduler status is disabled <https://bugzilla.redhat.com/show_bug.cgi?id=1209117>
10:30 glusterbot News from newglusterbugs: [Bug 1209120] [Snapshot] White-spaces are not handled properly in Snapshot scheduler <https://bugzilla.redhat.com/show_bug.cgi?id=1209120>
10:33 anrao joined #gluster
10:33 poornimag joined #gluster
10:35 soumya_ joined #gluster
10:48 purpleidea joined #gluster
10:48 purpleidea joined #gluster
10:50 atinmu joined #gluster
10:54 Manikandan_ joined #gluster
11:02 schandra joined #gluster
11:12 ashiq joined #gluster
11:12 kotreshhr joined #gluster
11:13 hagarth joined #gluster
11:18 diegows joined #gluster
11:19 meghanam joined #gluster
11:22 atalur joined #gluster
11:26 ricky-ti1 joined #gluster
11:29 vikumar joined #gluster
11:30 RameshN joined #gluster
11:31 glusterbot News from newglusterbugs: [Bug 1207867] Are not distinguishing internal vs external FOPs in tiering <https://bugzilla.redhat.com/show_bug.cgi?id=1207867>
11:31 glusterbot News from newglusterbugs: [Bug 1209129] DHT Rebalancing within a tier will cause the file to lose its heat(database) metadata <https://bugzilla.redhat.com/show_bug.cgi?id=1209129>
11:31 pdrakeweb joined #gluster
11:31 atinmu joined #gluster
11:37 shaunm joined #gluster
11:42 Anjana joined #gluster
11:43 Anjana shaunm: ping
11:43 glusterbot Anjana: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
11:45 vimal joined #gluster
11:47 edwardm61 joined #gluster
11:57 shaunm Anjana: hey
11:58 chirino joined #gluster
11:59 Anjana shaunm: Would you want me to reschedule our meeting as I did not see you accept it?
12:00 shaunm I accepted it. not sure why you wouldn't have gotten the notification
12:00 shaunm but there's no location info in the meeting
12:01 hagarth shaunm: would thursday work for you? we could also get spot to the meeting.
12:02 shaunm my thursday is wide open
12:02 hagarth shaunm: sounds great. Anjana - would you be able to send an updated invite then?
12:02 misc spot is likely at pycon.us ( with me ) on thursday
12:02 Anjana shaunm: will resend the invite for thursday with the bluejeans information
12:04 hagarth misc: would that be 8th through 16th?
12:05 tanuck joined #gluster
12:05 hagarth Anjana, shaunm: let us give spot a chance to join in, if not we can update him subsequently.
12:05 misc hagarth: yeah, you are speaking of this thursday ?
12:05 Anjana hagarth: ok, re-sending the invite for thursday
12:06 hagarth misc: right, 04/09
12:06 misc ( not sure if he stay the whole event, need to see his calendar )
12:06 hagarth misc: ok
12:08 hagarth Anjana++ thanks!
12:08 glusterbot hagarth: Anjana's karma is now 1
12:08 Anjana shaunm: Included the meeting location in the updated invite
12:08 Anjana thanks, hagarth
12:11 T3 joined #gluster
12:12 itisravi joined #gluster
12:13 meghanam joined #gluster
12:23 markd_ joined #gluster
12:27 Gill joined #gluster
12:30 anoopcs joined #gluster
12:31 glusterbot News from newglusterbugs: [Bug 1209138] [Backup]: Packages to be installed for glusterfind api to work <https://bugzilla.redhat.com/show_bug.cgi?id=1209138>
12:35 B21956 joined #gluster
12:49 wkf joined #gluster
12:51 hchiramm joined #gluster
12:52 kotreshhr left #gluster
12:59 LebedevRI joined #gluster
13:05 hchiramm_ joined #gluster
13:09 Anjana joined #gluster
13:09 julim joined #gluster
13:12 jmarley joined #gluster
13:16 plarsen joined #gluster
13:22 firemanxbr joined #gluster
13:27 hamiller joined #gluster
13:28 georgeh-LT2 joined #gluster
13:31 glusterbot News from resolvedglusterbugs: [Bug 1182514] Force add-brick lead to glusterfsd core dump <https://bugzilla.redhat.com/show_bug.cgi?id=1182514>
13:32 dgandhi joined #gluster
13:33 dgandhi joined #gluster
13:58 JamesG joined #gluster
13:58 obnox joined #gluster
13:58 Champi joined #gluster
13:58 frankS2 joined #gluster
13:58 saltsa joined #gluster
13:58 mrEriksson joined #gluster
13:58 Sjors joined #gluster
13:58 johnmark joined #gluster
13:58 delhage joined #gluster
13:58 kalzz joined #gluster
13:58 VeggieMeat joined #gluster
13:58 owlbot joined #gluster
13:58 RobertLaptop joined #gluster
13:58 mibby joined #gluster
13:58 mattmcc joined #gluster
13:58 coreping joined #gluster
13:58 Ramereth joined #gluster
13:58 bjornar joined #gluster
13:58 atrius` joined #gluster
13:58 afics joined #gluster
13:58 codex joined #gluster
13:58 johnnytran joined #gluster
13:58 sadbox joined #gluster
13:58 JustinClift joined #gluster
13:58 msvbhat joined #gluster
13:58 tuxcrafter joined #gluster
13:58 gomikemike joined #gluster
13:58 dblack joined #gluster
13:58 JoeJulian joined #gluster
13:58 calston joined #gluster
13:58 capri joined #gluster
13:58 _NiC joined #gluster
13:58 primusinterpares joined #gluster
13:58 tomased joined #gluster
13:58 Telsin joined #gluster
13:58 malevolent joined #gluster
13:58 xavih joined #gluster
13:58 squizzi joined #gluster
13:58 abyss^ joined #gluster
13:58 tdasilva joined #gluster
13:58 Bosse joined #gluster
13:58 ALarsen joined #gluster
13:58 mkzero joined #gluster
13:58 raatti joined #gluster
13:58 XpineX joined #gluster
13:58 Arrfab joined #gluster
13:58 the-me joined #gluster
13:58 zerick joined #gluster
13:58 m0zes joined #gluster
13:58 jvandewege joined #gluster
13:58 al joined #gluster
13:58 Dave2 joined #gluster
13:58 tom[] joined #gluster
13:58 nixpanic joined #gluster
13:58 cyberbootje1 joined #gluster
13:58 ccha2 joined #gluster
13:58 harmw joined #gluster
13:58 ThatGraemeGuy joined #gluster
13:58 92AAAWR99 joined #gluster
13:58 xrsanet joined #gluster
13:58 marcoceppi_ joined #gluster
13:58 eclectic joined #gluster
13:58 jcastillo joined #gluster
13:58 masterzen joined #gluster
13:58 partner joined #gluster
13:58 xaeth_afk joined #gluster
13:58 tg2 joined #gluster
13:58 churnd joined #gluster
13:58 ndk joined #gluster
13:58 samppah joined #gluster
13:58 raging-dwarf joined #gluster
13:58 ackjewt joined #gluster
13:58 ckotil joined #gluster
13:58 stickyboy joined #gluster
13:58 Bardack joined #gluster
13:58 xiu joined #gluster
13:58 tessier joined #gluster
13:58 ron-slc joined #gluster
13:58 ninkotech joined #gluster
13:58 ninkotech_ joined #gluster
13:58 fubada joined #gluster
13:58 gothos joined #gluster
13:58 NuxRo joined #gluster
13:58 scuttlemonkey joined #gluster
13:58 Lee- joined #gluster
13:58 jxfx joined #gluster
13:58 necrogami joined #gluster
13:58 Guest58148 joined #gluster
13:58 dastar joined #gluster
13:58 _PiGreco_ joined #gluster
13:58 billputer joined #gluster
13:58 nhayashi joined #gluster
13:58 loctong joined #gluster
13:58 swebb joined #gluster
13:58 sage joined #gluster
13:58 ghenry joined #gluster
13:58 danny__ joined #gluster
13:58 kkeithley joined #gluster
13:58 dockbram joined #gluster
13:58 coredump joined #gluster
13:58 jaank joined #gluster
13:58 Leildin joined #gluster
13:58 ktosiek joined #gluster
13:58 bitpushr joined #gluster
13:58 bivak joined #gluster
13:58 20WAAZNEC joined #gluster
13:58 kbyrne joined #gluster
13:58 d-fence joined #gluster
13:58 bfoster joined #gluster
13:58 oxidane_ joined #gluster
13:58 glusterbot joined #gluster
13:58 ndevos joined #gluster
13:58 jermudgeon_ joined #gluster
13:58 T0aD joined #gluster
13:58 lkoranda joined #gluster
13:58 and` joined #gluster
13:58 foster joined #gluster
13:58 CP|AFK joined #gluster
13:58 cicero joined #gluster
13:58 drue joined #gluster
13:58 Peanut joined #gluster
13:58 Kins joined #gluster
13:58 uebera|| joined #gluster
13:58 R0ok_ joined #gluster
13:58 klaas joined #gluster
13:58 nstageberg joined #gluster
13:58 kumar joined #gluster
13:58 siel joined #gluster
13:58 DV joined #gluster
13:58 o5k joined #gluster
13:58 kovshenin joined #gluster
13:58 anrao joined #gluster
13:58 purpleidea joined #gluster
13:58 shaunm joined #gluster
13:58 vimal joined #gluster
13:58 chirino joined #gluster
13:58 tanuck joined #gluster
13:58 T3 joined #gluster
13:58 Gill joined #gluster
13:58 B21956 joined #gluster
13:58 LebedevRI joined #gluster
13:58 hchiramm_ joined #gluster
13:58 Anjana joined #gluster
13:58 firemanxbr joined #gluster
13:58 hamiller joined #gluster
13:58 georgeh-LT2 joined #gluster
13:58 dgandhi joined #gluster
13:58 mikedep333 joined #gluster
13:58 DV_ joined #gluster
13:58 plarsen joined #gluster
13:58 brianw joined #gluster
13:58 atrius joined #gluster
13:58 fyxim joined #gluster
13:58 lezo joined #gluster
13:58 osiekhan3 joined #gluster
13:58 sankarshan_away joined #gluster
13:58 sac joined #gluster
13:58 social joined #gluster
13:58 khanku joined #gluster
13:58 ultrabizweb joined #gluster
13:58 Marqin joined #gluster
13:58 vincent_vdk joined #gluster
13:58 JPaul joined #gluster
13:58 crashmag joined #gluster
13:58 tigert joined #gluster
13:58 misc joined #gluster
13:58 edualbus joined #gluster
13:58 harish_ joined #gluster
13:58 baoboa joined #gluster
13:58 balacafalata joined #gluster
13:58 17SAB8NFB joined #gluster
13:58 papamoose joined #gluster
13:58 monotek1 joined #gluster
13:58 Intensity joined #gluster
13:58 frakt joined #gluster
13:58 Arminder joined #gluster
13:58 RaSTar joined #gluster
13:58 JonathanD joined #gluster
13:58 nage joined #gluster
13:58 l0uis joined #gluster
13:58 twx joined #gluster
13:58 johnbot joined #gluster
13:58 puiterwijk joined #gluster
13:58 yosafbridge joined #gluster
13:58 samsaffron___ joined #gluster
13:58 jmarley joined #gluster
13:58 hchiramm joined #gluster
13:58 edwardm61 joined #gluster
13:58 balacafalata-bil joined #gluster
13:58 lyang0 joined #gluster
13:58 ttkg joined #gluster
13:58 suliba joined #gluster
13:58 devilspgd joined #gluster
13:58 morse joined #gluster
13:58 kke joined #gluster
13:58 dev-zero joined #gluster
13:58 mdavidson joined #gluster
13:59 hagarth joined #gluster
13:59 _Bryan_ joined #gluster
14:01 yosafbridge joined #gluster
14:01 yosafbridge joined #gluster
14:01 maveric_amitc_ joined #gluster
14:19 meghanam joined #gluster
14:22 ricky-ticky joined #gluster
14:25 Manikandan joined #gluster
14:25 Manikandan_ joined #gluster
14:26 DV__ joined #gluster
14:31 glusterbot News from newglusterbugs: [Bug 1207624] BitRot :- scrubber is not detecting rotten data and not marking file as 'BAD' file <https://bugzilla.redhat.com/show_bug.cgi?id=1207624>
14:31 glusterbot News from newglusterbugs: [Bug 1201724] Handle the review comments in bit-rot patches <https://bugzilla.redhat.com/show_bug.cgi?id=1201724>
14:32 deepakcs joined #gluster
14:32 wushudoin joined #gluster
14:35 nstageberg joined #gluster
14:35 jpds joined #gluster
14:36 jpds Someone know what happened to http://community.gluster.org/q/a-replica-node-has-failed-completely-and-must-be-replaced-with-new-empty-hardware-how-do-i-add-the-new-hardware-and-bricks-back-into-the-replica-pair-and-begin-the-healing-process/ ?
14:36 jpds It still has a reference at https://www.gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server
14:40 vimal joined #gluster
14:52 julim joined #gluster
14:52 roost joined #gluster
14:54 jmarley joined #gluster
14:57 jbrooks joined #gluster
14:58 virusuy joined #gluster
14:58 virusuy joined #gluster
15:01 jobewan joined #gluster
15:11 ricky-ticky joined #gluster
15:15 jpds Also, could someone explain to me what the difference between geo-replication and replica volume is?
15:16 * jpds finds https://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Replicated_Volumes_vs_Geo-replication
15:18 ricky-ticky1 joined #gluster
15:22 ipmango__ joined #gluster
15:28 lpabon joined #gluster
15:34 nangthang joined #gluster
15:35 cicero semiosis: do you have any suggestions for replacing the glusterfs-3.3 ppa that you had? even if we already have a few boxes locally with that installed. thanks!
15:36 thangnn_ joined #gluster
15:40 lexi2 joined #gluster
15:40 lexi2 hello, I am trying what I am sure lots of people have tried before: running glusterfs with sqlite
15:41 lexi2 I did some googling and found mixed reviews on gluster with sqlite; not sure if it was version based or not.
15:41 lexi2 I ran a few small tests and got some mixed results: read performance seemed great, write performance not so much; when I made a small write I noticed a decent amount of network traffic
15:41 lexi2 wanted to know if gluster was using a large block size or something
15:50 vipulnayyar joined #gluster
15:56 cicero any db over a remote fs sounds like a bad idea :P
16:04 jbrooks joined #gluster
16:09 dev089 joined #gluster
16:11 _Bryan_ joined #gluster
16:16 dev089 hi, I already tried the botme history, and I also read https://joejulian.name/blog/optimizing-web-performance-with-glusterfs/. I basically want to keep two webservers' data folders in sync. Both run on different ESXi hosts in a datacenter. The servers are linked with 1Gbit cards. Traffic on the webserver is max (!) a few thousand visitors each day. This setup is more for HA than load balancing. Should I use glusterfs or would something lik
16:17 dev089 my main concern simply is: is the "small files" problem that serious, or will it be fine when i go through the websites and make sure most use autoloading and put varnish in front of each one?
16:18 lexi2 hello?
16:18 glusterbot lexi2: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:19 hchiramm_ joined #gluster
16:23 RameshN joined #gluster
16:30 halfinhalfout joined #gluster
16:30 soumya_ joined #gluster
16:35 lalatenduM joined #gluster
16:38 JoeJulian dev089: Sounds like it should be fine.
16:40 JoeJulian jpds: The community.gluster.org pages were a third-party Q&A service that ceased operation. All the good stuff and thousands of pages of crap were lost.
16:41 plarsen joined #gluster
16:41 jpds JoeJulian: Ah.
16:41 misc JoeJulian: what was the service name ?
16:42 JoeJulian I don't remember. johnmark would know.
16:42 misc ( so I can add to my list of story on outsourcing )
16:42 JoeJulian Honestly, I was glad to see it go.
16:43 JoeJulian I was constantly amazed at how few people understand the concept of "question".
16:43 halfinhalfout does anyone know where the native client keeps a local copy of the volfile?
16:43 jpds So, what happens if I don't mount a client on my VM?
16:43 JoeJulian It doesn't. It's retrieved from a server and kept in memory.
16:44 rotbeard joined #gluster
16:44 jpds I have two nodes replicating data, and they mount the bricks on themselves.
16:45 jpds brick = /srv/data-gluster, mount = /srv/data
16:45 JoeJulian jpds: You'll need to frame that question a bit more. In its current state, nothing happens, you just read/write to the VM's image.
16:45 hchiramm_ joined #gluster
16:45 jpds Yet I can see the data in /srv/data-gluster too?
16:45 JoeJulian Ah
16:45 halfinhalfout JoeJulian: thx. I'm trying to figure out what to expect on a client if one of the gluster servers providing a replicate volume has its IP address change
16:46 jpds So, do I even need the client? That's what I want to know.
16:46 JoeJulian Operations have to go through the client mount or it's like using dd to write blocks of data to the middle of a disk and expecting xfs/ext4/whatever to know what to do with that data.
16:46 jpds Aha, OK.
16:47 JoeJulian ~hostnames | halfinhalfout
16:47 glusterbot halfinhalfout: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
16:47 JoeJulian If your volume was created with hostnames, you're golden.
16:47 JoeJulian If not, it's a bit trickier.
16:51 halfinhalfout ok. my volume was created w/ hostnames.
16:51 JoeJulian perfect. Change the hostname resolution and you're good.
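A short sketch of the re-probe trick glusterbot describes, for a pool that was originally built with IP addresses (hostname is a placeholder): probing the peer by name from any other peer makes glusterd record the hostname instead of the old IP:

    gluster peer probe server2.example.com
    gluster peer status    # the peer should now be listed by hostname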
16:52 jpds Is there a way to make a replica volume carry on working on one host if the other goes down?
16:52 JoeJulian Already does that.
16:53 jpds Hmm, my website stops working when a node shuts down.
16:54 JoeJulian Check the client log for clues.
16:54 * jpds wonders if it's the profiling.
16:54 halfinhalfout I had a problem the other day where the IP address of 1 server changed after it came back up after a prolonged down-time, and it took a while for the correct IP to get to all the clients. At the same time I had the volume in question go off-line to the clients, I think 'cause the heal process was taking too much I/O.
16:54 JoeJulian shouldn't be.
16:54 jpds JoeJulian: Let me double-check.
16:55 halfinhalfout I was trying to figure out if the volume being unavailable to clients would be entirely attributable to high I/O during heal, or partially attributable to the bad IP address
16:56 hchiramm_ joined #gluster
16:58 jpds [afr-self-heal-common.c:2212:afr_self_heal_completion_cbk] 0-data-store-replicate-0: background  entry self-heal failed on
17:04 victori joined #gluster
17:08 halfinhalfout so, I gather the volume unavailability was all 'cause of the high I/O heal action.
17:08 JoeJulian Well it's not *supposed* to be; there are different priority channels and self-heal is at a lower priority than client activity.
17:09 halfinhalfout clients would try to access the volume on the server w/ the bad IP, fail, and then quickly access the volume on the remaining good server
17:09 JoeJulian but I guess that wasn't always working out. I did see a patch that's supposed to address that in the 3.6 branch. Not sure if it was backported to 3.5. 3.4 I haven't heard that complaint.
17:11 JoeJulian Oh, interesting. So some lengthy time before you do something like that you should shorted the TTL on your hostname, if using dns. (/etc/hosts shouldn't have that behavior)
17:11 JoeJulian s/shorted/shorten/
17:11 glusterbot What JoeJulian meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
17:11 JoeJulian @meh
17:11 glusterbot JoeJulian: I'm not happy about it either
17:12 JoeJulian I'm going to have to take a break and figure out what changed to break the regex replacement in supybot.
17:19 halfinhalfout I see this: https://bugzilla.redhat.com/show_bug.cgi?id=1177418
17:19 glusterbot Bug 1177418: unspecified, unspecified, ---, bugs, CLOSED CURRENTRELEASE, entry self-heal in 3.5 and 3.6 are not compatible
17:21 halfinhalfout referencing 3.6 and above taking full lock on a directory for a shorter time than 3.5
17:23 halfinhalfout but I haven't yet found the fix in 3.6 that changes the way self-heal locks dirs
17:37 julim joined #gluster
17:48 Rapture joined #gluster
17:57 anrao joined #gluster
18:01 plarsen joined #gluster
18:02 jackdpeterson joined #gluster
18:13 Gill joined #gluster
18:30 Sjors joined #gluster
18:32 Anjana joined #gluster
18:42 lexi2 joined #gluster
18:49 o5k_ joined #gluster
18:52 o5k__ joined #gluster
19:04 jpds Anyone know why I constantly see [socket.c:514:__socket_rwv] 0-data-store-client-1: readv failed (No data available) - in my logs?
19:06 jpds I rebooted a peer (B), they're back online and connected to the other peer (A), but B sees the A as Disconnected.
19:07 jpds Both are running 3.3
19:11 _Bryan_ joined #gluster
19:43 jbrooks joined #gluster
19:43 verdurin joined #gluster
19:54 roost joined #gluster
20:17 semiosis cicero: you could check the apt cache for the packages, probably in /var/lib/apt, or somewhere under /var, and dig them out
20:17 semiosis otherwise i strongly recommend upgrading
20:17 semiosis @3.4 upgrade notes
20:17 glusterbot semiosis: http://vbellur.wordpress.com/2013/07/15/upgrading-to-glusterfs-3-4/
20:18 semiosis the upgrade to 3.4 should be pretty easy, and there are packages for 3.4 in the ,,(ppa)
20:18 glusterbot The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
20:18 semiosis cicero: also note, you shouldn't rely on someone else's PPA for production deployments.  set up your own APT repo, or at least use your own PPA.
20:27 DV joined #gluster
20:46 papamoose left #gluster
20:47 AndroUser joined #gluster
20:48 AndroUser Hi all, I'm looking at implementing glusterfs into a health care environment and I'm wondering if anyone has any experience with glusterfs and dicom/pack systems?
20:51 doekia hi my single (mitigation of network issue) does not want to start NFS on local host ... what is the trick?
20:52 B21956 left #gluster
20:53 tg2 hmm, remove-brick fails saying that a rebalance is in progress... but gluster volume status all tasks doesn't show a rebalance...
20:53 tg2 anywhere to look for this?
20:53 ildefonso joined #gluster
20:57 plarsen joined #gluster
20:59 deniszh joined #gluster
21:27 tg2 JoeJulian, you know of anything that causes a volume to think it's doing something that it isn't?
21:27 tg2 http://pastebin.com/pGK5yBD0
21:27 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
21:28 tg2 http://www.fpaste.org/207760/
21:30 tg2 I'm aware there's a case where, if a file is being copied during a rebalance, it will stay in rebalance state until that file is done
21:30 tg2 but this has been stopped for days
21:30 JoeJulian Nope, no clue. That's always annoyed me.
21:33 tg2 hm, wonder if there is a process lock somewhere
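A few places worth cross-checking when a volume claims a rebalance is still running (volume name is a placeholder):

    gluster volume rebalance myvol status   # per-node rebalance state
    gluster volume status myvol             # any lingering rebalance/remove-brick task?
    ps aux | grep -i rebalance              # leftover rebalance glusterfs process on a node?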
21:37 semiosis mrdougalmon: haven't heard anything about dicom/pack systems but if you say a bit more about your needs maybe someone can help anyway
21:39 MugginsM joined #gluster
21:44 tg2 mrdougalmon, you mean PACS?
21:47 lexi2 joined #gluster
21:48 mrdougalmon DICOM is a standard file type used for x-rays, MRI scanners, CT scanners, etc. Effectively we store these on a huge SAN that costs lots and is inefficient. My idea is to utilise glusterfs as nearline storage for the millions of files these systems create, to replicate them at file level and hopefully reduce the cost of storing them. The solutions tend to use NFS, iSCSI and SMB to interface to the storage. These
21:48 tg2 how big are the files
21:48 tg2 and it's write once read many?
21:48 tg2 how many TB
21:49 tg2 you can likely do it with glsuter
21:49 tg2 but you'd have to take your data usage into consideration when designing a cluster
21:50 tg2 if you're on a newer version of centricity it's on linux already anyway, so easy enough to integrate
21:51 wkf joined #gluster
21:52 mrdougalmon WORM will be good and a reason for selecting glusterfs
21:52 tg2 We did some work a few years ago with centricity 3.x and PACS when some clients were migrating from 2.x on windows to 3.x on linux.  You can probably even run the native gluster client on the clients that are reading the data, which would probably be better; NFS could be used as a fallback for legacy connections
21:53 mrdougalmon @tg2 that's the same PACS! We are looking at about 80tb total
21:53 tg2 easy ya
21:53 tg2 do your devices connect to the pacs master which then writes to the dicom system?
21:53 tg2 or do they write directly like some of the larger scanners
21:54 tg2 you're using rhel or centos for centricity?
21:54 tg2 and v3 or v4?
21:55 mrdougalmon All image data goes via a central server so we can drop the glusterfs client on there
21:55 tg2 what does the data throughput look like
21:56 tg2 GB/day etc
21:56 tg2 read and write
21:56 mrdougalmon Looking at RHEL for the support aspect
21:56 mrdougalmon That I do not yet know
21:56 tg2 if you're looking at RHEL you can consider RHS which is gluster but with support contract
21:56 mrdougalmon Yeah, gotta work if I get run over ;-)
21:57 tg2 I think RHS has a test-drive thing now that you can try it in amazon
21:57 tg2 but I wasn't able to get it to work; I'm sure somebody at redhat would be willing to help
21:57 tg2 even on integration
21:57 tg2 recommend hardware etc
21:57 tg2 or you can go rogue and run gluster with your own hardware under it
21:57 mrdougalmon We are in talks with hp and redhat to deploy a POC was just wondering if anyone had done it before
21:58 tg2 it's a pretty simple storage platform.
21:58 dgandhi joined #gluster
21:59 mrdougalmon Tbh i had not considered your suggestion of putting the client directly on the centricity  box! That will work so much better!
21:59 tg2 it uses posix
22:00 mrdougalmon So no major gotcha 's?
22:00 tg2 IIRC most of the dicom standard is per-file and doesn't involve any special attributes or block-based magic
22:00 tg2 but the obvious answer is to test
22:01 tg2 I don't see why it wouldn't work, even if you're using NFS, there should be no lock contention issues since all requests are handled by the same "client" ie: the central server
22:02 mrdougalmon Cool beans. Many thanks and yes we will be testing it prior to go live!
22:02 tg2 I'm sure somebody in here could refer you to somebody at RHS to take the handoff and make it a bit smoother
22:03 tg2 If you need a consult on hardware let me know
22:08 mrdougalmon Will do, thanks again for your help and suggestions. Feeling better about doing it now.
22:14 alpha01 joined #gluster
22:18 roost joined #gluster
22:25 lexi2_ joined #gluster
22:32 badone_ joined #gluster
22:44 sage joined #gluster
22:47 Roland- joined #gluster
22:48 Roland- oh hi
22:48 Roland- question, I need a solution to back up some .imgs on write without affecting host performance. I don't mind data being written back in the background
22:48 Roland- can glusterfs do this
22:49 Roland- The behavior I am looking for is to store files locally first and then sync the content to the
22:49 Roland- second node in the background.
22:50 Roland- ah geo-replication
22:57 JoeJulian Yeah, geo-replication could do that.
22:58 Roland- Might be a huge issues though as I am planning to store qcow2 images
22:58 Roland- as far as I see geo-replication does incremental
22:58 JoeJulian It transfers incremental changes on a timer using rsync.
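A rough sketch of the geo-replication setup in the 3.5/3.6 CLI, assuming a master volume mastervol and an existing slave volume slavevol on slavehost (all names hypothetical), and glossing over the passwordless-ssh prerequisite:

    gluster system:: execute gsec_create
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status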
22:59 Roland- I see, but since we are talking about let's say a 50GB file
22:59 Roland- that has changed 1 bit...
22:59 Roland- it will transfer the whole file? can it do copy on write
23:00 JoeJulian rsync only transfers chunks that have changed.
23:00 Roland- I will check
23:01 JoeJulian If you don't need clustered storage, https://github.com/hollow/inosync would do what you're asking.
23:01 dbruhn joined #gluster
23:01 Roland- yes that is good however I need something at block level
23:02 Roland- as these are virtualization images, seen as single files
23:02 Roland- sometimes the contents inside that file are encrypted
23:02 JoeJulian why do you think that?
23:02 Roland- because I know what I have there
23:02 JoeJulian "I need something at block level"
23:02 JoeJulian I know what a vm image is.
23:03 chirino joined #gluster
23:03 Roland- inosync is like lsyncd which I have
23:03 JoeJulian try it. rsync a 50g file, change 1 byte, then rsync it again and look at the data transfer.
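One way to run the experiment JoeJulian suggests (file and destination are placeholders; --inplace matters for big VM images so rsync rewrites only the changed blocks instead of building a whole new copy on the receiver):

    rsync -av --inplace --stats disk.qcow2 remote:/backup/
    # flip one byte somewhere in the middle of the image
    printf 'x' | dd of=disk.qcow2 bs=1 seek=1048576 count=1 conv=notrunc
    rsync -av --inplace --stats disk.qcow2 remote:/backup/
    # compare "Total bytes sent" between the two runs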
23:03 Roland- will do
23:06 wkf joined #gluster
23:07 Roland- thank you
23:19 MugginsM joined #gluster
23:22 o5k joined #gluster
23:36 nstageberg joined #gluster
23:36 nstageberg left #gluster
23:38 purpleidea joined #gluster
23:54 chirino joined #gluster
