
IRC log for #gluster, 2014-11-08


All times shown according to UTC.

Time Nick Message
00:24 PeterA how do we deal with thousands of heal-failed and split-brain entries?
00:25 PeterA normally restarting the glusterfs service helps, but not this time :(
00:31 RicardoSSP joined #gluster
00:40 nage joined #gluster
00:43 nage joined #gluster
00:54 David_H_Smith joined #gluster
01:02 David_H_Smith joined #gluster
01:13 bala joined #gluster
01:20 natgeorg joined #gluster
01:21 mator_ joined #gluster
01:21 jaroug_ joined #gluster
01:21 cyberbootje1 joined #gluster
01:21 foster_ joined #gluster
01:21 prasanth|afk joined #gluster
01:21 msvbhat_ joined #gluster
01:22 tomased joined #gluster
01:22 partner joined #gluster
01:22 radez_g0n3 joined #gluster
01:23 kodapa joined #gluster
01:23 bunni_ joined #gluster
01:23 bala joined #gluster
01:29 David_H_Smith joined #gluster
01:36 hightower4 joined #gluster
01:36 diegows joined #gluster
01:40 partner joined #gluster
01:55 David_H_Smith joined #gluster
01:58 necrogami joined #gluster
02:13 plarsen joined #gluster
02:36 anoopcs joined #gluster
03:05 necrogami joined #gluster
03:11 nishanth joined #gluster
03:33 anoopcs joined #gluster
03:36 necrogami joined #gluster
03:58 badone joined #gluster
04:00 bala joined #gluster
04:01 theron joined #gluster
04:06 necrogami joined #gluster
04:07 meghanam joined #gluster
04:07 meghanam_ joined #gluster
04:19 bala1 joined #gluster
04:34 necrogami joined #gluster
05:07 bala joined #gluster
05:12 hagarth joined #gluster
05:12 haomaiwa_ joined #gluster
05:31 j2b joined #gluster
05:32 j2b Hi guys! I want to understand how the cluster file system operates when there are concurrent edits (2x web heads, using the same changing files on a glusterfs mount).
05:32 j2b From my tests and understanding, recent changes overwrite prior ones if the file was opened on both web heads at the same time.
05:33 j2b Is this the intended behaviour, or is there some configuration that needs to be tweaked to merge all changes into the edited file?
05:33 j2b Would appreciate some explanation...
05:38 kshlm joined #gluster
05:59 lalatenduM joined #gluster
06:23 samsaffron___ joined #gluster
06:31 frankS2 joined #gluster
06:41 samsaffron___ joined #gluster
06:43 frankS2 joined #gluster
07:12 David_H_Smith joined #gluster
07:13 David_H_Smith joined #gluster
07:29 azar joined #gluster
07:46 meghanam joined #gluster
07:46 meghanam_ joined #gluster
08:08 meghanam_ joined #gluster
08:10 toecutter joined #gluster
08:10 meghanam joined #gluster
08:26 mariusp joined #gluster
08:27 ProT-0-TypE joined #gluster
08:38 meghanam joined #gluster
08:38 meghanam_ joined #gluster
08:46 keds joined #gluster
08:47 rotbeard joined #gluster
09:49 SOLDIERz joined #gluster
09:58 ekuric joined #gluster
10:45 anoopcs joined #gluster
11:10 rafi1 joined #gluster
11:27 ProT-O-TypE joined #gluster
11:35 haomaiwa_ joined #gluster
11:37 cultavix joined #gluster
11:59 cfeller joined #gluster
12:09 ekuric left #gluster
12:25 deniszh joined #gluster
12:36 LebedevRI joined #gluster
12:49 diegows joined #gluster
12:49 hagarth joined #gluster
12:51 Pupeno joined #gluster
12:59 Pupeno_ joined #gluster
13:02 deniszh joined #gluster
13:06 haomai___ joined #gluster
13:20 haomaiwa_ joined #gluster
13:23 deniszh joined #gluster
14:06 mariusp joined #gluster
14:22 bala joined #gluster
14:28 mariusp joined #gluster
14:34 mariusp joined #gluster
14:41 nshaikh joined #gluster
14:48 theron joined #gluster
15:03 oxidane joined #gluster
15:09 soumya__ joined #gluster
15:13 bala joined #gluster
15:34 j2b Still looking for discussion on concurrent write solutions on GlusterFS, anybody?
15:41 rwheeler joined #gluster
15:44 bala joined #gluster
15:46 n-st joined #gluster
15:57 elico joined #gluster
15:58 mariusp joined #gluster
16:02 Slashman joined #gluster
16:02 haomaiwa_ joined #gluster
16:09 SOLDIERz joined #gluster
16:10 mariusp joined #gluster
16:26 mariusp joined #gluster
16:45 shruti joined #gluster
16:46 shruti left #gluster
16:48 7GHAAKH1Z joined #gluster
17:18 pradeepto joined #gluster
17:20 mariusp joined #gluster
17:30 mariusp joined #gluster
17:50 mariusp joined #gluster
18:01 theron joined #gluster
18:05 dataio joined #gluster
18:05 JoeJulian @later tell j2b regarding your question of concurrent writes, glusterfs works according to the posix standard. If you don't want concurrency problems, use locks just as you would on a local filesystem to prevent the same problem.
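JoeJulian's advice above can be sketched with POSIX record locks (fcntl), which glusterfs honors like a local filesystem. A minimal illustration in Python, assuming a hypothetical file path standing in for a file on the gluster mount:

```python
# Minimal sketch of "use locks just as you would on a local filesystem":
# take an exclusive POSIX record lock before writing a shared file, so
# concurrent writers (e.g. two web heads) serialize instead of silently
# overwriting each other's changes.
# The path below is illustrative; in a real deployment it would live on
# the glusterfs mount.
import fcntl

path = "/tmp/shared.txt"  # stand-in for a file on the gluster mount

with open(path, "a") as f:
    fcntl.lockf(f, fcntl.LOCK_EX)   # blocks until no other writer holds the lock
    f.write("one line from this writer\n")
    f.flush()
    fcntl.lockf(f, fcntl.LOCK_UN)   # release so the other web head can proceed
```

Each writer that follows this pattern appends atomically with respect to the others; a writer that skips the lock still sees last-write-wins, which is the behaviour j2b observed.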
18:05 glusterbot JoeJulian: The operation succeeded.
18:05 dataio hi @ all
18:10 dataio I am wondering: is it possible to reinstall a brick when the cluster only has one brick in the volume? I mean, can I just reinstall the brick and re-add it to the volume without damaging my data?
18:11 mariusp joined #gluster
18:56 coredump joined #gluster
19:01 toecutter joined #gluster
19:01 theron joined #gluster
19:27 nshaikh joined #gluster
19:47 mariusp joined #gluster
20:18 plarsen joined #gluster
20:23 rakkaus_ joined #gluster
20:25 rakkaus_ Hi guys! I have a question for gluster experts. I have a system based on Amazon with gluster as storage; the problem is that when I try to clone a git repo on a mounted volume it takes a huge amount of time - 60 sec! for a super small repo
20:26 rakkaus_ how do you guys manage the performance of gluster?
20:26 rakkaus_ also when I launched a perf test doing the same with 50 threads
20:27 rakkaus_ it's super slow
20:27 rakkaus_ please, any advice on how to improve performance
20:27 rakkaus_ would be really helpful
20:44 glusterbot New news from newglusterbugs: [Bug 1161885] Possible file corruption on dispersed volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1161885> || [Bug 1161886] rename operation leads to core dump <https://bugzilla.redhat.com/show_bug.cgi?id=1161886>
20:48 mariusp joined #gluster
20:50 theron joined #gluster
21:02 pradeepto joined #gluster
21:15 mariusp joined #gluster
21:35 pradeepto joined #gluster
21:46 David_H_Smith joined #gluster
21:49 David_H__ joined #gluster
21:50 David_H__ joined #gluster
21:54 David_H_Smith joined #gluster
21:55 toecutter joined #gluster
22:15 glusterbot New news from newglusterbugs: [Bug 1161893] volume no longer available after update to 3.6.1 <https://bugzilla.redhat.com/show_bug.cgi?id=1161893>
22:25 uebera|| joined #gluster
22:36 theron joined #gluster
22:37 JoeJulian dataio: If you have only one brick in a volume, if you "reinstall" that brick, which I assume means re-formatting it, all the data on your volume will be lost because you formatted its only brick.
22:38 JoeJulian rakkaus_: Are your bricks all in the same availability zone?
22:39 JoeJulian rakkaus_: if not, that's why it's slow. All latency between replicas will be amplified.
22:40 rakkaus_ yes same zone
22:40 rakkaus_ even in a local env
22:40 rakkaus_ it's the same...
22:42 rakkaus_ ofc in a local env it's a bit faster, but still very slow under small load (50 threads cloning a small repo)
22:44 rakkaus_ in the prod env with replication it's extremely slow
22:44 rakkaus_ it looks like a misconfiguration - I can't believe that's how gluster works...
22:53 T0aD joined #gluster
22:55 mariusp joined #gluster
23:07 ktogias joined #gluster
23:07 ktogias Hi all
23:10 ktogias After an update to glusterfs and restarting glusterd on the bricks... I have been stuck for several minutes with "Locking failed on  ...." for 2 of the 6 bricks of my 2x2 replicated distributed volume. I am also unable to access the volume by mounting it; it gives an I/O error. Machines that already have it mounted can access files, though...
23:11 ktogias Is there something I should do? Just wait for the two nodes to unlock?
23:11 ktogias Any hint about this awkward situation?
23:16 ktogias the whole problem started when, just after the update of the first bricks, the healing process automatically initiated before the other bricks finished updating...
23:19 lkoranda joined #gluster
23:51 mariusp joined #gluster
23:53 russoisraeli joined #gluster
23:54 russoisraeli Hello guys. I have a quick question. Is it possible to convert a replica 2 volume into a striped replica? I now have two servers which I'd like to use as replicas, but in the near future I'd like to add 2 more servers to the volume and create a striped replica
23:55 russoisraeli I want stripe as opposed to distribute because I am planning to store VM images on the volume, which are quite large and need random seeks
23:56 diegows joined #gluster
