
IRC log for #gluster, 2015-12-04


All times shown according to UTC.

Time Nick Message
00:10 delhage joined #gluster
00:13 zhangjn joined #gluster
00:16 plarsen joined #gluster
00:16 plarsen joined #gluster
00:17 plarsen joined #gluster
00:21 hagarth joined #gluster
00:40 delhage joined #gluster
00:55 EinstCrazy joined #gluster
01:01 zhangjn joined #gluster
01:03 cjellick joined #gluster
01:10 dlambrig joined #gluster
01:12 zoldar Are the options suggested under http://www.gluster.org/community/documentation/index.php/Virt-store-usecase#Tunables safe to apply for KVM qcow2 images used through libgfapi? (cluster is a replicate, replica 2 setup)
01:21 JoeJulian Yes
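
(Aside: those tunables disable caching that can hurt live VM images and enable quorum. A minimal sketch of applying them, assuming a hypothetical volume named "vmstore"; the option list follows the linked Virt-store-usecase page, so verify it against your gluster version:

    gluster volume set vmstore performance.quick-read off
    gluster volume set vmstore performance.read-ahead off
    gluster volume set vmstore performance.io-cache off
    gluster volume set vmstore performance.stat-prefetch off
    gluster volume set vmstore cluster.eager-lock enable
    gluster volume set vmstore network.remote-dio enable
    gluster volume set vmstore cluster.quorum-type auto
    gluster volume set vmstore cluster.server-quorum-type server

If your packages ship the "virt" profile file, "gluster volume set vmstore group virt" applies the whole set in one command.)
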
01:30 gildub joined #gluster
01:32 Lee1092 joined #gluster
01:35 coredump joined #gluster
01:38 mlncn joined #gluster
01:42 RedW joined #gluster
01:53 haomaiwa_ joined #gluster
01:57 julim joined #gluster
02:01 haomaiwa_ joined #gluster
02:11 r4inmak3r joined #gluster
02:15 zzzbrett joined #gluster
02:17 suliba joined #gluster
02:25 nathwill joined #gluster
02:34 spalai joined #gluster
02:38 F2Knight joined #gluster
02:45 dgandhi joined #gluster
02:58 bharata-rao joined #gluster
03:01 haomaiwang joined #gluster
03:05 xecycle joined #gluster
03:08 xecycle Hi; having a problem: du a-quota-limited-dir says 60T is used, df that-dir says 80T is used, df root-of-volume says 61T is used.  Server running 3.6.5-1.el6.x86_64.
03:08 partner joined #gluster
03:19 haomai___ joined #gluster
03:30 JoeJulian xecycle: what's du --apparent show?
03:32 sakshi joined #gluster
03:38 jrm16020 joined #gluster
03:45 atinm joined #gluster
03:51 itisravi joined #gluster
03:56 n0b0dyh3r3 joined #gluster
03:57 Park joined #gluster
03:58 nbalacha joined #gluster
04:03 nehar joined #gluster
04:05 zhangjn joined #gluster
04:07 overclk joined #gluster
04:07 xecycle JoeJulian: --apparent also shows 60T
04:07 glusterbot xecycle: JoeJulian's karma is now 22
04:07 xecycle oops, is this a minus?  lol
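
(Aside: a sketch of how to compare the two views of usage, with hypothetical volume and path names. The quota translator keeps its own accounting, which can disagree with what du reports:

    # what the quota translator has accounted against the limited directory
    gluster volume quota myvol list /the-limited-dir
    # on-disk vs apparent usage, measured on a client mount
    du -sh /mnt/myvol/the-limited-dir
    du -sh --apparent-size /mnt/myvol/the-limited-dir

)
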
04:09 kanagaraj joined #gluster
04:09 RameshN joined #gluster
04:16 atalur joined #gluster
04:17 zhangjn joined #gluster
04:19 dusmant joined #gluster
04:22 bharata_ joined #gluster
04:23 shubhendu joined #gluster
04:23 sripathi1 joined #gluster
04:25 ramteid joined #gluster
04:27 gem joined #gluster
04:38 raghu joined #gluster
04:40 badone joined #gluster
04:45 nehar joined #gluster
04:45 aravindavk joined #gluster
04:49 jiffin joined #gluster
04:49 RameshN joined #gluster
04:53 ashiq joined #gluster
04:56 hgowtham joined #gluster
04:56 Manikandan joined #gluster
05:08 ppai joined #gluster
05:11 pppp joined #gluster
05:13 F2Knight joined #gluster
05:14 deepakcs joined #gluster
05:17 kdhananjay joined #gluster
05:21 hgowtham joined #gluster
05:29 calavera joined #gluster
05:30 badone joined #gluster
05:36 Apeksha joined #gluster
05:37 kshlm joined #gluster
05:38 ndarshan joined #gluster
05:45 javi404 joined #gluster
05:46 F2Knight joined #gluster
05:52 ramky joined #gluster
05:55 aravindavk joined #gluster
06:06 spalai joined #gluster
06:07 aravindavk joined #gluster
06:09 Humble joined #gluster
06:16 harish joined #gluster
06:21 rjoseph joined #gluster
06:23 javi404 joined #gluster
06:32 SOLDIERz joined #gluster
06:33 vmallika joined #gluster
06:35 rafi joined #gluster
06:54 mobaer joined #gluster
07:03 anil joined #gluster
07:17 mhulsman joined #gluster
07:19 Norky joined #gluster
07:21 jtux joined #gluster
07:36 overclk joined #gluster
07:42 nis joined #gluster
07:44 nis Hi, I am running ubuntu 14.04 LTS with glusterfs 3.4.2-1 and I have tons of messages in gluster log: [2015-12-04 07:20:50.770431] W [fuse-bridge.c:2167:fuse_writev_cbk] 0-glusterfs-fuse: 4041217838: WRITE => -1 (Bad file descriptor)
07:45 nis can anyone please help me understand how it can be resolved?
07:46 nis gluster is filling my root filesystem fast, writing 2MB/sec of errors, and I don't understand why or how to resolve that
07:47 Park joined #gluster
07:54 JoeJulian nis: I think I remember that error back in those old days. That's a really old and critically buggy version.
07:54 nis Hi JoeJulian , It is an old message I guess
07:55 nis and it is a big problem since I run this gluster setup in production
07:55 JoeJulian I would upgrade. In the meantime, you can increase the log level above warning.
07:55 nis JoeJulian: do you remember how it can be resolved? was there a bug in Bugzilla ?
07:56 calavera joined #gluster
07:56 nis JoeJulian: how can I increase the log level on glusterfs?
07:57 nis JoeJulian: gluster volume set <name> diagnostics.client-log-level ERROR ??
08:05 JoeJulian yes
08:05 nis JoeJulian: thanks, I set the client-log-level to ERROR and it stopped the flood
08:06 nis JoeJulian: Can you point me to the bug description, I would like to know if it was fixed in later versions like 3.6.x ?
08:06 JoeJulian I strongly suggest upgrading asap. That specific version has a multitude of (imho) disastrous bugs.
08:06 nis JoeJulian: Your help is most appreciated as usual
08:07 JoeJulian iirc, that was fixed as early as 3.4.3, so yes. It's still fixed in 3.6.
08:08 nis JoeJulian: many thanks
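
(Aside: a sketch of the log-level options discussed above, with a hypothetical volume name. The client and brick logs are tuned separately:

    # quiet the FUSE client log that was flooding the root filesystem
    gluster volume set myvol diagnostics.client-log-level ERROR
    # the brick-side log has its own knob
    gluster volume set myvol diagnostics.brick-log-level WARNING

)
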
08:10 overclk joined #gluster
08:19 [Enrico] joined #gluster
08:22 deniszh joined #gluster
08:27 itisravi_ joined #gluster
08:34 itisravi joined #gluster
08:48 rafi joined #gluster
08:51 skoduri joined #gluster
08:54 atalur joined #gluster
09:21 ivan_rossi joined #gluster
09:26 Slashman joined #gluster
09:41 klaxa joined #gluster
09:42 ctria joined #gluster
09:44 mhulsman joined #gluster
09:48 Norky joined #gluster
09:54 skoduri_ joined #gluster
09:56 rjoseph joined #gluster
09:57 jwd joined #gluster
10:00 jwaibel joined #gluster
10:02 fsimonce joined #gluster
10:04 arcolife joined #gluster
10:06 arcolife joined #gluster
10:23 calavera joined #gluster
10:36 hexasoft joined #gluster
10:37 hexasoft Hello.
10:37 glusterbot hexasoft: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:38 hexasoft thanks, bot, but "polite people say HELO".
10:39 hexasoft My problem: I recently tried to setup full encrypted communications with glusterfs (both client/server and server/server). It works fine but:
10:40 JarJarBinks joined #gluster
10:41 hexasoft 1. performance is heavily impacted, something like 25% slower. Is that known, or did I do something wrong?
10:41 hexasoft 2. with encryption activated, servers seem to have problems re-joining after a hard reboot
10:43 hexasoft the server that reboots can see the whole pool (pool list says they are connected), but the other one (I use 2 servers for basic replication) still claims that the rebooted one is disconnected
10:44 JarJarBinks Hi. I am very very new to glusterfs. I have spent two days reading how to solve split-brain issues. My problem is that gluster volume heal gv1 info split-brain lists only '/' as the cause. There are 3 nodes in the cluster; one had a power problem and was shut down for 2 days. The other two lived on, but report 1024 possible split-brains on '/'. All guides found say that I should determine which file
10:44 hexasoft I'm using debian 8 (64bit) with last stable version from gluster.org repo
10:44 JarJarBinks to delete, and delete the unwanted files. But I cannot delete /
10:45 JarJarBinks running on centos 6.5
10:46 rjoseph joined #gluster
10:47 mhulsman joined #gluster
10:56 Park joined #gluster
10:56 mhulsman joined #gluster
10:58 skoduri_ joined #gluster
11:02 JVieira joined #gluster
11:02 JVieira Hi guys
11:02 JVieira wonder if anybody can help
11:02 JVieira how can I achieve 2-node HA iSCSI with gluster?
11:03 ndevos hexasoft: I'm not sure, but kshlm and jdarcy (both seem to be missing atm) know most about the ssl encryption, probably best to send an email to gluster-users@gluster.org
11:03 hexasoft ndevos: ok thanks. I will subscribe and send a more detailed report.
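
(Aside: a sketch of how SSL is typically enabled on a volume, with hypothetical names; certificate paths and details vary by version, so treat this as a starting point rather than a recipe:

    # each node expects, by default: /etc/ssl/glusterfs.pem,
    # /etc/ssl/glusterfs.key and /etc/ssl/glusterfs.ca
    gluster volume set myvol client.ssl on      # client<->brick traffic
    gluster volume set myvol server.ssl on      # brick<->brick traffic
    # optionally restrict which certificate CNs may connect
    gluster volume set myvol auth.ssl-allow 'server1,server2,client1'
    # management (glusterd) encryption: create this on every node, then restart glusterd
    touch /var/lib/glusterd/secure-access

)
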
11:04 ndevos JarJarBinks: a split-brain on a directory can happen when files have been added/removed or ownership/permissions of the directory have changed
11:04 JVieira can anybody point me in the right direction? Any tutorials i can follow?
11:05 JVieira Active / Passive or Active Active env
11:05 ndevos JarJarBinks: http://gluster.readthedocs.org/en/latest/Troubleshooting/split-brain/ should help you on your way
11:05 glusterbot Title: Split Brain - Gluster Docs (at gluster.readthedocs.org)
11:07 ndevos JVieira: I would start with http://gluster.readthedocs.org/en/latest/Administrator%20Guide/GlusterFS%20iSCSI/ and make the service+ip fail-over with pacemaker
11:07 glusterbot Title: GlusterFS iSCSI - Gluster Docs (at gluster.readthedocs.org)
11:07 ramky joined #gluster
11:08 ndevos JVieira: active/passive will probably be the easiest, once that works fine, you could try active/active
11:09 JVieira ndevos thank you so much i will look into this... im new to the gluster and im still trying to understand how i can accomplish my goal :)
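
(Aside: a rough sketch of the active/passive half with pacemaker/pcs, which the iSCSI doc leaves out. All names, IPs and paths here are hypothetical, and the resource agents' parameters should be checked against your pacemaker version:

    # floating IP the initiators will target
    pcs resource create iscsi_vip ocf:heartbeat:IPaddr2 ip=192.0.2.10 cidr_netmask=24
    # iSCSI target plus a LUN backed by a file on the gluster mount
    pcs resource create iscsi_tgt ocf:heartbeat:iSCSITarget iqn=iqn.2015-12.org.example:gluster
    pcs resource create iscsi_lun ocf:heartbeat:iSCSILogicalUnit \
        target_iqn=iqn.2015-12.org.example:gluster lun=1 path=/mnt/gluster/iscsi-store.img
    # keep everything on one node, started in order
    pcs resource group add iscsi_group iscsi_vip iscsi_tgt iscsi_lun

)
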
11:10 badone joined #gluster
11:13 bavila joined #gluster
11:25 atalur joined #gluster
11:25 ppai joined #gluster
11:33 hexasoft by the way a more general question: in pure replica configuration (1x2) how does a client choose the server to communicate with for reading files?
11:33 nangthang joined #gluster
11:33 hexasoft it looks like it is "the last one used, until it stops working". Am I right?
11:35 DV joined #gluster
11:36 hexasoft I'm looking for a way to get a "nearest" choice instead (in terms of network latency): my machines are virtual, in 2 different buildings. I would like a client in building A to use the server in the same building (when available, of course). But machines can migrate.
11:37 jiffin itisravi: ^^
11:39 klaxa joined #gluster
11:45 itisravi hexasoft: The default is based on a hash on the file name but you can explicitly set the `cluster.read-subvolume` option
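
(Aside: a sketch of the latter, with hypothetical names. The value is the internal client subvolume name, which follows the <volname>-client-N pattern; note this pins reads for every client of the volume, so it is not a per-client "nearest" selection:

    gluster volume set myvol cluster.read-subvolume myvol-client-0

)
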
11:56 itisravi joined #gluster
11:58 nangthang joined #gluster
12:03 mlncn joined #gluster
12:07 jwd joined #gluster
12:23 kdhananjay joined #gluster
12:26 nehar joined #gluster
12:31 Manikandan joined #gluster
12:31 DV joined #gluster
12:31 itisravi joined #gluster
12:34 nbalacha joined #gluster
12:38 ira joined #gluster
12:39 overclk joined #gluster
12:41 EinstCrazy joined #gluster
12:41 JarJarBinks ndevos: thank you
13:09 rjoseph joined #gluster
13:28 d0nn1e joined #gluster
13:31 plarsen joined #gluster
13:32 Pupeno joined #gluster
13:32 Pupeno joined #gluster
13:40 JarJarBinks ndevos: there is just one problem when following http://gluster.readthedocs.org/en/latest/Troubleshooting/split-brain/
13:40 glusterbot Title: Split Brain - Gluster Docs (at gluster.readthedocs.org)
13:41 JarJarBinks the getfattr strips leading / from file name, but the entry reported from volume heal <vol> info is /
13:42 ndevos JarJarBinks: you need to run the getfattr command on the storage servers and with the path to the bricks
13:42 JarJarBinks ok
13:42 JarJarBinks thx
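
(Aside: what that typically looks like, with a hypothetical brick path. Run it on each storage server against the brick directory itself, not the client mount:

    getfattr -d -m . -e hex /data/brick1/gv1
    # the trusted.afr.* attributes show which replica holds pending
    # changes for which peer; comparing them picks the good copy

)
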
13:43 haomaiwa_ joined #gluster
13:44 haomaiwang joined #gluster
13:45 haomaiwang joined #gluster
13:46 Manikandan joined #gluster
13:46 haomaiwang joined #gluster
13:47 haomaiwang joined #gluster
13:48 haomaiwang joined #gluster
13:49 haomaiwa_ joined #gluster
13:50 chirino joined #gluster
13:50 18VAAFH1O joined #gluster
13:51 haomaiwa_ joined #gluster
13:53 haomaiwa_ joined #gluster
13:54 haomaiwang joined #gluster
13:55 haomaiwa_ joined #gluster
13:57 haomaiwang joined #gluster
13:59 rafi1 joined #gluster
14:00 haomaiwa_ joined #gluster
14:01 haomaiwang joined #gluster
14:02 5EXAAEM1J joined #gluster
14:02 mobaer joined #gluster
14:03 haomaiwang joined #gluster
14:04 5EXAAEM4N joined #gluster
14:05 5EXAAEM5X joined #gluster
14:06 haomaiwa_ joined #gluster
14:07 haomaiwa_ joined #gluster
14:08 haomaiwang joined #gluster
14:09 haomaiwa_ joined #gluster
14:10 haomaiwa_ joined #gluster
14:11 haomaiwang joined #gluster
14:11 Pupeno joined #gluster
14:16 jwaibel joined #gluster
14:19 skoduri joined #gluster
14:21 Pupeno joined #gluster
14:22 shyam joined #gluster
14:22 B21956 joined #gluster
14:24 siel joined #gluster
14:30 unclemarc joined #gluster
14:30 ayma joined #gluster
14:35 coredump joined #gluster
14:37 Pupeno joined #gluster
14:39 billputer joined #gluster
14:41 B21956 joined #gluster
14:44 Pupeno joined #gluster
14:50 theron joined #gluster
14:53 jmarley joined #gluster
15:00 anil joined #gluster
15:01 ayma joined #gluster
15:08 kovshenin joined #gluster
15:13 hexasoft left #gluster
15:15 squizzi_ joined #gluster
15:15 bennyturns joined #gluster
15:20 shyam joined #gluster
15:26 muneerse joined #gluster
15:37 Pupeno joined #gluster
15:42 skylar joined #gluster
15:42 B21956 joined #gluster
15:44 maserati joined #gluster
15:45 jwaibel joined #gluster
15:47 Pupeno joined #gluster
15:50 shyam joined #gluster
15:51 jmarley joined #gluster
15:55 cjellick joined #gluster
15:55 Pupeno joined #gluster
16:01 kovshenin joined #gluster
16:04 Pupeno joined #gluster
16:13 Pupeno joined #gluster
16:15 theron joined #gluster
16:16 coredump joined #gluster
16:17 kkeithley joined #gluster
16:19 shyam joined #gluster
16:22 Pupeno joined #gluster
16:27 klaas joined #gluster
16:27 lord4163 joined #gluster
16:36 mobaer left #gluster
16:41 ocramuias joined #gluster
16:41 ocramuias Hello
16:41 glusterbot ocramuias: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:41 shyam joined #gluster
16:42 ocramuias when I use remove-brick, is all the data in this brick moved to another brick?
16:54 spalai ocramuias: correct
16:57 jmarley joined #gluster
16:59 Pupeno joined #gluster
17:00 Pupeno_ joined #gluster
17:01 ocramuias nice, but with remove-brick alone, without any other command? If I run remove-brick, does it migrate the data immediately?
17:02 rideh joined #gluster
17:15 coredump joined #gluster
17:35 ivan_rossi left #gluster
17:39 dblack joined #gluster
17:42 cjellick joined #gluster
17:44 Rapture joined #gluster
17:45 jbrooks joined #gluster
17:47 rwheeler joined #gluster
17:50 bowhunter joined #gluster
17:51 nathwill joined #gluster
17:57 Amun_Ra joined #gluster
17:57 shaunm joined #gluster
18:01 ninkotech joined #gluster
18:02 ninkotech_ joined #gluster
18:03 Pupeno joined #gluster
18:07 diegows joined #gluster
18:07 bavila left #gluster
18:12 skylar joined #gluster
18:16 skylar1 joined #gluster
18:18 mhulsman joined #gluster
18:30 ocramuias but after the remove-brick, is it necessary to start a rebalance?
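
(Aside: as far as I understand, no separate rebalance is needed; remove-brick drives its own migration. A sketch with hypothetical volume and brick names:

    gluster volume remove-brick myvol server2:/bricks/b1 start
    # repeat until the status shows "completed"
    gluster volume remove-brick myvol server2:/bricks/b1 status
    # only then make the removal final
    gluster volume remove-brick myvol server2:/bricks/b1 commit

)
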
18:36 ayma joined #gluster
18:39 jbrooks joined #gluster
18:42 ghenry joined #gluster
18:42 ghenry joined #gluster
19:05 inhumantsar i have a workflow like so: clientA writes fileA (0.5-2GB) to locA in gluster. clientB then reads fileA, applies some processing, and then writes the result to locB/fileA, where it might be picked up by a clientC, which applies more processing and writes to locC. there are potentially dozens of clients doing each of these tasks simultaneously. this may not be an
19:05 inhumantsar ideal workload for gluster, but would it cause gluster to choke and die, or merely underperform?
19:16 JesperA_ joined #gluster
19:31 mhulsman joined #gluster
19:32 theron joined #gluster
19:36 theron_ joined #gluster
19:39 cjellick joined #gluster
19:41 maserati left #gluster
19:52 mlncn joined #gluster
19:56 kovshenin joined #gluster
19:57 dblack joined #gluster
20:03 Philambdo joined #gluster
20:03 kovshenin joined #gluster
20:12 lpabon joined #gluster
20:25 cjellick joined #gluster
20:34 amye joined #gluster
20:44 dblack joined #gluster
21:03 mhulsman joined #gluster
21:04 diegows joined #gluster
21:32 theron joined #gluster
21:35 JesperA My googling skills fail me again, can't find any good articles about performance comparisons between "traditional" gluster volumes and disperse/erasure-coded volumes. Anyone?
21:49 Pupeno joined #gluster
21:54 nathwill joined #gluster
22:23 JoeJulian inhumantsar: That type of workload is not uncommon. CERN uses gluster like that.
22:23 JoeJulian JesperA: I haven't seen that yet either. I suspect nobody's done it yet.
22:25 JesperA JoeJulian ok thanks, seems like I have to either wait it out or go on my own adventure then.
22:26 JoeJulian If you do, please publish your results.
22:27 JesperA Yeah, I am pretty terrible at writing any kind of articles, but surely I can post some number comparisons at least
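
(Aside: for anyone reproducing such a comparison, the two volume types are created like this; server and brick names are hypothetical, and disperse volumes need gluster >= 3.6:

    # 2-way replicated volume
    gluster volume create repvol replica 2 server1:/bricks/rep server2:/bricks/rep
    # 3-brick dispersed volume tolerating the loss of one brick
    gluster volume create ecvol disperse 3 redundancy 1 \
        server1:/bricks/ec server2:/bricks/ec server3:/bricks/ec

)
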
22:37 tree333 joined #gluster
22:43 theron joined #gluster
22:51 inhumantsar JoeJulian: We've been having a lot of trouble with stability, especially of the fuse client, with this sort of workload
22:51 inhumantsar Do you know a place which outlines some of the pain points encountered by someone in a similar scenario?
22:52 F2Knight joined #gluster
22:59 Pupeno joined #gluster
22:59 Pupeno joined #gluster
23:42 mlncn joined #gluster
23:54 rideh joined #gluster
