
IRC log for #gluster, 2015-01-02


All times shown according to UTC.

Time Nick Message
00:11 Pupeno joined #gluster
00:29 fandi joined #gluster
00:39 msmith_ joined #gluster
01:40 msmith_ joined #gluster
01:43 harish joined #gluster
01:49 nangthang joined #gluster
02:04 pdrakeweb joined #gluster
02:21 kanagaraj joined #gluster
02:32 plarsen joined #gluster
02:47 RameshN joined #gluster
02:55 RameshN joined #gluster
03:10 bala joined #gluster
03:11 msmith_ joined #gluster
03:15 hagarth joined #gluster
03:24 lalatenduM joined #gluster
03:27 shubhendu joined #gluster
03:33 kanagaraj joined #gluster
03:35 atinmu joined #gluster
03:40 bala joined #gluster
03:41 itisravi joined #gluster
03:42 bharata-rao joined #gluster
03:48 hchiramm joined #gluster
03:50 msmith_ joined #gluster
03:57 kanagaraj_ joined #gluster
04:10 kumar joined #gluster
04:13 spandit joined #gluster
04:14 suman_d_ joined #gluster
04:15 RameshN_ joined #gluster
04:15 ppai joined #gluster
04:16 prg3 joined #gluster
04:17 nbalacha joined #gluster
04:21 crashmag joined #gluster
04:21 kanagaraj joined #gluster
04:26 Dave2_ joined #gluster
04:30 saurabh joined #gluster
04:33 elico joined #gluster
04:42 suman_d_ joined #gluster
04:42 fandi joined #gluster
04:43 kdhananjay joined #gluster
04:54 georgeh-LT2 joined #gluster
05:02 ndarshan joined #gluster
05:13 prasanth_ joined #gluster
05:17 rafi1 joined #gluster
05:34 msmith__ joined #gluster
06:02 karnan joined #gluster
06:08 nshaikh joined #gluster
06:08 anil joined #gluster
06:16 hagarth joined #gluster
06:17 maveric_amitc_ joined #gluster
06:19 atalur joined #gluster
06:23 TvL2386 joined #gluster
06:25 Anuradha joined #gluster
06:28 suman_d_ joined #gluster
06:44 hchiramm_ joined #gluster
06:49 nishanth joined #gluster
06:55 overclk joined #gluster
06:57 ppai joined #gluster
06:58 glusterbot News from newglusterbugs: [Bug 1178079] [AFR] getfattr on fuse mount gives error : Software caused connection abort <https://bugzilla.redhat.com/show_bug.cgi?id=1178079>
06:59 zutto Could someone shed some light on the stat issue in glusterfs? I have a volume with 6 bricks; at the root folder, ls fails, but inside any of the folders in the volume it actually works
06:59 zutto fails as in it just hangs
07:05 zerick joined #gluster
07:05 soumya_ joined #gluster
07:12 RameshN joined #gluster
07:12 RameshN_ joined #gluster
07:21 rgustafs joined #gluster
07:23 maveric_amitc_ joined #gluster
07:25 jtux joined #gluster
07:27 soumya_ joined #gluster
07:34 prasanth_ joined #gluster
07:38 ppai joined #gluster
07:44 soumya joined #gluster
07:48 maveric_amitc_ joined #gluster
08:04 sripathi joined #gluster
08:04 _benj_` joined #gluster
08:04 msmith_ joined #gluster
08:08 _benj_` Hello Guys, i'm new with gluster. I'm trying to recover from a voluntary split-brain but am not succeeding. whenever i do a 'gluster volume heal testvol info split-brain', i see 'Number of entries: x' incrementing each minute and for each peer, a new line with '2015-01-02 05:47:56 /'. I tried to do a 'gluster volume heal testvol full' but didn't help, the item is still on the list of items to be healed. Am I missing something ?
08:16 kanagaraj joined #gluster
08:17 sripathi left #gluster
08:18 JoeJulian zutto: are there a lot of files in the volume root?
08:19 zutto at the moment i dont know the exact number, but 20-30 files, ~25 folders
08:19 zutto so not that many
08:19 JoeJulian _benj_`: You have to fix split-brain manually. The split "/" annoys me because I still think it should heal automatically.
08:19 JoeJulian On the bricks, read the ,,(extended attributes)
08:19 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
08:20 _benj_` thanks JoeJulian, i'll have a look
08:20 JoeJulian Look for the trusted.gluster.afr entries. Pick one and replace them with all zeros.
08:21 JoeJulian Pick one, as in pick one *brick* not one entry. Replace all entries on the brick you choose.
08:21 JoeJulian Good luck. I'm heading to bed.
08:22 _benj_` gn8
08:26 fsimonce joined #gluster
08:26 itisravi joined #gluster
08:27 _benj_` JoeJulian: FYI - worked, now Number of entries: 0. Thanks for your help.
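For readers following along: a minimal sketch of the repair JoeJulian describes above, run directly on the brick directories. The volume name testvol comes from the conversation; the brick path and the exact trusted.afr.* attribute names are assumptions and must be taken from your own getfattr output.

    # inspect the AFR changelog attributes on each brick's copy of the split item (here the brick root)
    getfattr -m . -d -e hex /path/to/brick

    # pick ONE brick and zero out every trusted.afr.* entry it shows, e.g.:
    setfattr -n trusted.afr.testvol-client-0 -v 0x000000000000000000000000 /path/to/brick
    setfattr -n trusted.afr.testvol-client-1 -v 0x000000000000000000000000 /path/to/brick

    # then re-check from a management node
    gluster volume heal testvol info split-brain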
08:37 rjoseph joined #gluster
08:43 meghanam joined #gluster
08:54 ppai joined #gluster
08:54 iPancreas joined #gluster
09:00 ghenry joined #gluster
09:00 ghenry joined #gluster
09:02 kovshenin joined #gluster
09:05 Norky joined #gluster
09:12 maveric_amitc_ joined #gluster
09:14 kanagaraj joined #gluster
09:15 hajoucha joined #gluster
09:21 Anuradha joined #gluster
09:22 hajoucha Hi, in gluster 3.6.1 'gluster vol info' gives "No volumes present", however ' gluster volume create gv0 disperse 3 redundancy 1 transport rdma array1-ib:/exports/gv0/brick1 array2-ib:/exports/gv0/brick1 array3-ib:/exports/gv0/brick1' gives  'volume create: gv0: failed: /exports/gv0/brick1 is already part of a volume'
09:23 hajoucha what is the best way to remove all bricks and start from the beginning again? It is a test setup, so I do not have any data there.
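No one picked this up in-channel, but for later readers: the "is already part of a volume" error usually means the brick directories still carry GlusterFS metadata from an earlier attempt. A rough cleanup sketch for a throwaway test setup, using the brick path from the question above (this wipes the bricks, so only do it when there is no data to keep):

    # run on every node, for every leftover brick directory
    setfattr -x trusted.glusterfs.volume-id /exports/gv0/brick1
    setfattr -x trusted.gfid /exports/gv0/brick1
    rm -rf /exports/gv0/brick1/.glusterfs

    # or, since it is an empty test setup, simply recreate the directory
    rm -rf /exports/gv0/brick1 && mkdir -p /exports/gv0/brick1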
09:38 nkirstendt joined #gluster
09:45 nkirstendt Hi guys, we are running a small web service where we store binary files of around 50KB to 3-4MB each. We have about 16TB of storage on one machine, but soon we will need more space. My first question is: "Does GlusterFS work well with files of this size?" and the second question is: "Is it normal to set up GlusterFS with one node with one brick for now, and when we need more space to just add more nodes+bricks?" All sto[…] only to distribute and scale out. Thanks!
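The question went unanswered here, but for context: growing a pure distribute volume later is normally a matter of add-brick plus a rebalance. A small sketch with made-up volume, host, and brick names:

    # add capacity with a new brick on a new node
    gluster volume add-brick myvol newnode:/bricks/brick2

    # spread existing files across the enlarged layout
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status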
09:47 rgustafs joined #gluster
09:53 msmith_ joined #gluster
10:14 LebedevRI joined #gluster
10:15 _shaps_ joined #gluster
10:15 RameshN joined #gluster
10:17 kanagaraj joined #gluster
10:52 harish joined #gluster
10:59 RameshN_ joined #gluster
11:05 DV joined #gluster
11:10 [o__o] joined #gluster
11:24 Norky joined #gluster
11:42 msmith_ joined #gluster
11:48 kkeithley joined #gluster
12:21 edwardm61 joined #gluster
12:22 bala joined #gluster
12:25 shellyb joined #gluster
12:26 shellyb Hi guys, is it possible to move a brick from one node to another given that the cluster is used only for distribution?
12:28 partner 22:43 <@JoeJulian> Oh, replace-brick... I wouldn't.
12:28 partner i pretty much asked the same question a couple of days ago and that was the response
12:29 partner did a bit of googling and it seems like quite a dangerous operation
12:29 partner i think the proper way is to add a new brick and then remove the old one, i.e. data gets migrated away from the old brick, but i have not tested that much at all
12:29 partner https://bugzilla.redhat.com/show_bug.cgi?id=1039954
12:30 glusterbot Bug 1039954: medium, unspecified, ---, kaushal, CLOSED CURRENTRELEASE, replace-brick command should warn it is broken
12:30 zutto joined #gluster
12:32 shellyb @partner, in 'distribute' I guess I cannot remove a brick. Where will the data go? Yes, I can add a new brick, but how can I sync it with the one that is going to be shut down?
12:34 partner shellyb: basically as you add a new brick your capacity will increase. if it has more space than the old brick, there is enough room to get those files onto the new brick
12:35 shellyb partner: so there should be a way to instruct glusterfs to move the data from the brick that is going to be shut down, because as far as I know I cannot do this manually.
12:35 shellyb GlusterFS will put the files wherever it sees fit.
12:36 partner it knows about all the bricks in the volume and
12:37 partner https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_managing_volumes.md#shrinking-volumes
12:38 partner for some reason that documentation still prominently describes replace-brick, but from what i have understood it is evil and should not be used
12:39 partner but i guess we better wait for the busy-pros to come online as i am no longer at all sure how these functions have changed over the versions :(
12:39 shellyb partner: thanks for the notes
12:41 partner https://lists.gnu.org/archive/html/gluster-devel/2012-10/msg00050.html - there is some old post about removal of that functionality
12:42 partner which explains the adding and removal of bricks and how data gets migrated away, i.e. an "intelligent rebalance" that only targets the brick you're removing
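A sketch of the add-then-drain sequence partner is describing, following the shrinking-volumes document linked above. Volume, host, and brick names are made up for illustration:

    # add the new brick first so there is room for the migrated data
    gluster volume add-brick myvol newnode:/bricks/brick2

    # start draining the old brick; its files are rebalanced onto the remaining bricks
    gluster volume remove-brick myvol oldnode:/bricks/brick1 start

    # watch the migration and only commit once status reports it completed
    gluster volume remove-brick myvol oldnode:/bricks/brick1 status
    gluster volume remove-brick myvol oldnode:/bricks/brick1 commit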
12:43 bala joined #gluster
12:44 rotbeard joined #gluster
12:46 shellyb partner: 10x ;)
12:49 partner in progress        253493.00
12:50 bene joined #gluster
12:55 calisto joined #gluster
13:02 partner interesting, rebalance status reports that.. however that's only 70 hours while my fix-layout has been running since dec 5..
13:02 partner some counter went over its limits?-)
13:08 social joined #gluster
13:14 social joined #gluster
13:28 calisto joined #gluster
13:29 msmith_ joined #gluster
13:34 tom[] is it ok to make an LVM snapshot of a brick in a replica volume and then mount the brick detached from the cluster and make a backup from it?
13:36 mrEriksson Why not just back up the actual gluster volume? Since the snapshot won't in any way guarantee data consistency anyways?
13:37 mrEriksson But I guess it would work. The docs describe reading files directly from the brick as being "OK but not supported in any way"
13:38 tom[] mrEriksson: because the files in the volume are application state associated with a database
13:39 tom[] i can write a script that gets a db lock, makes the lvm snapshot, then begins a dump transaction, then releases the lock
13:39 mrEriksson True
13:40 tom[] then when the dump is finished, it should be consistent with the files in the cluster, and i can copy them both to a remote location
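A very rough sketch of the sequence tom[] outlines, assuming a MariaDB/Galera node and an LVM-backed brick. Device names, sizes, and paths are invented, and coordinating the lock window and handling errors needs more care than these few lines:

    # snapshot the brick's logical volume (classic, non-thin LVM snapshot)
    lvcreate --snapshot --size 10G --name brick-snap /dev/vg0/gluster-brick

    # take a consistent dump of the database at roughly the same point in time
    mysqldump --single-transaction --all-databases > /backup/db-$(date +%F).sql

    # mount the snapshot read-only and copy both pieces off-box
    mkdir -p /mnt/brick-snap
    mount -o ro /dev/vg0/brick-snap /mnt/brick-snap    # xfs may additionally need -o nouuid
    rsync -a /mnt/brick-snap/ backuphost:/backups/brick/
    rsync -a /backup/ backuphost:/backups/db/

    # clean up the snapshot
    umount /mnt/brick-snap
    lvremove -f /dev/vg0/brick-snap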
13:40 mrEriksson But why not use glusters snapshot feature instead? If you already have lvm set up to support snapshots
13:40 tom[] no reason, other than ignorance
13:40 mrEriksson Ah :)
13:41 mrEriksson I'm in no way an expert at this, but it seems safer to access the filesystem via gluster rather than directly via the bricks
13:41 tom[] mrEriksson: sure
13:42 mrEriksson (Though, fetching files from the bricks have saved me a couple of times when I started out with gluster :))
13:42 * tom[] reading about gluster snapshot
13:42 mrEriksson So how well does database-like usage perform and work for you on a gluster volume?
13:42 mrEriksson I mostly store qcows :)
13:43 tom[] the db data is not saved in the gluster
13:43 tom[] i use a galera cluster instead
13:43 nangthang joined #gluster
13:44 thangnn_ joined #gluster
13:44 mrEriksson Ah, never used that
13:44 tom[] and it's plenty fast for my transaction rate. i've got 100 to 1000x read to write ratio
13:44 tom[] so far so good
13:44 mrEriksson Most of our larger databases use db2
13:45 tom[] poor thing
13:45 * tom[] pets mrEriksson
13:45 mrEriksson Which is extremely stable, but also extremely painful to work with :-)
13:45 mrEriksson haha
13:45 mrEriksson Well, I don't really dislike it that much these days
13:46 tom[] you can get used to anything. so you should be careful what you get used to
13:46 mrEriksson Haha, true :)
13:46 mrEriksson The thing I like about db2 is that you deploy it, and then it just works
13:47 mrEriksson But I really don't like making changes to the db2 environments :-)
13:49 tom[] hm. my lvm setup doesn't use thin provisioning
13:50 mrEriksson Ah, right, lvm can do snapshots without thin provisioning too
13:51 tom[] still, it's worth investigating
13:57 tom[] i wonder if all the bricks in the repo cluster need to be tp lvm vols, or if it is sufficient that just the brick on the node i'm taking the snapshot from is
13:57 virusuy joined #gluster
14:00 tom[] gluster requires thinly-provisioned lvm volumes
14:00 tom[] for snapshotting
14:01 tom[] or should i say, snapshooting
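For reference, the volume-level snapshot feature (new in the 3.6 series) expects every brick to live on a thin-provisioned LVM volume. A rough sketch of both halves, with invented names and sizes:

    # thin pool plus a thin LV for one brick
    lvcreate --size 100G --thinpool brickpool vg0
    lvcreate --virtualsize 90G --thin vg0/brickpool --name brick1
    mkfs.xfs /dev/vg0/brick1 && mount /dev/vg0/brick1 /bricks/brick1

    # once all bricks in the volume sit on thin LVs:
    gluster snapshot create mysnap myvol
    gluster snapshot list
    gluster snapshot info mysnap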
14:07 hchiramm joined #gluster
14:17 fandi joined #gluster
14:17 ekman- joined #gluster
14:20 jmarley joined #gluster
14:20 hagarth joined #gluster
14:22 shaunm joined #gluster
14:35 coredump joined #gluster
14:42 tom[] my gluster is 3.4.2 which has no volume snapshooting
14:43 shubhendu joined #gluster
14:45 georgeh-LT2_ joined #gluster
14:54 mrEriksson tom[]: No, that's a pretty recent feature
15:06 iPancreas joined #gluster
15:18 msmith_ joined #gluster
15:19 shubhendu_ joined #gluster
15:30 glusterbot News from newglusterbugs: [Bug 1163543] Fix regression test spurious failures <https://bugzilla.redhat.com/show_bug.cgi?id=1163543>
15:31 sprachgenerator joined #gluster
15:40 kanagaraj joined #gluster
15:43 sprachgenerator left #gluster
15:51 ildefonso joined #gluster
15:55 suman_d_ joined #gluster
15:57 plarsen joined #gluster
15:59 bennyturns joined #gluster
16:03 _Bryan_ joined #gluster
16:11 roost_ joined #gluster
16:16 georgeh-LT2 joined #gluster
16:17 georgeh-LT2 joined #gluster
16:29 Pupeno joined #gluster
16:29 Pupeno joined #gluster
16:47 lalatenduM joined #gluster
16:49 msmith_ joined #gluster
16:57 msmith_ joined #gluster
17:02 vimal joined #gluster
17:09 merlink joined #gluster
17:23 gothos joined #gluster
17:23 kanagaraj joined #gluster
17:31 calisto joined #gluster
17:33 Pupeno joined #gluster
17:41 jobewan joined #gluster
17:59 ckotil joined #gluster
18:04 fandi joined #gluster
18:36 hagarth1 joined #gluster
19:59 edong23 joined #gluster
20:19 rjoseph joined #gluster
20:26 DV joined #gluster
21:06 msmith_ joined #gluster
21:13 jobewan joined #gluster
22:28 rdircio joined #gluster
22:28 rdircio hi
22:28 glusterbot rdircio: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
22:29 rdircio i'm trying to remove a brick from a 3 node setup
22:29 rdircio Removing bricks from replicate configuration is not allowed without reducing replica count explicitly.
22:31 rdircio tried gluster volume remove-brick volume1 aspireone:/gluster-storage
22:31 rdircio how do i "reduce the replica count explicitly"?
22:53 calum_ joined #gluster
22:55 coredump joined #gluster
23:16 _Bryan_ joined #gluster
23:29 merlink joined #gluster
23:46 SOLDIERz joined #gluster
23:49 JoeJulian rdircio: gluster volume remove-brick replica 2 volume1 aspireone:/gluster-storage
23:51 rdircio thank you :)
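For anyone finding this later: the documented form of that command puts the volume name before the replica keyword, and depending on the version gluster may also demand an explicit force (or a start/status/commit cycle) before it will drop the brick. A sketch using the names from this conversation:

    gluster volume remove-brick volume1 replica 2 aspireone:/gluster-storage force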
