
IRC log for #gluster, 2014-07-15


All times shown according to UTC.

Time Nick Message
00:01 plarsen joined #gluster
00:05 dtrainor joined #gluster
00:17 Matthaeus joined #gluster
00:42 cjanbanan joined #gluster
00:50 gildub joined #gluster
00:51 _Bryan_ joined #gluster
01:18 edwardm61 joined #gluster
01:19 jbernard joined #gluster
01:20 jbernard found a typo in the quickstart guide (http://www.gluster.org/documentation/quickstart/index.html)
01:20 glusterbot Title: Quick Start Guide Gluster (at www.gluster.org)
01:20 Edddgy joined #gluster
01:20 jbernard last section "Test using the volume"
01:20 jbernard you'll need a directory for the mount command
01:22 jbernard cheers
01:22 jbernard left #gluster
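(For context, the quickstart step jbernard is referring to mounts the volume from a client before testing it; a minimal sketch of the missing step, using hypothetical server and volume names:)

    mkdir -p /mnt/gluster                          # create the mount point the guide omits
    mount -t glusterfs server1:/gv0 /mnt/gluster   # then mount the volume before running the test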
01:30 gmcwhistler joined #gluster
01:32 skulker joined #gluster
01:34 skulker using glusterd 3.3.0 on CentOS 6, if I initiate a kernel panic on one of the bricks the glusterfs client hangs for about 2 minutes. Is there a setting to initiate a faster recovery to the healthy brick?
01:36 P0w3r3d joined #gluster
01:42 cjanbanan joined #gluster
01:47 harish joined #gluster
02:20 Edddgy joined #gluster
02:30 bala joined #gluster
02:42 decimoe joined #gluster
02:42 dencaval joined #gluster
02:42 igorwidl joined #gluster
02:42 cfeller joined #gluster
02:42 stigchristian joined #gluster
02:42 StarBeast joined #gluster
02:42 nullck joined #gluster
02:43 fim joined #gluster
02:45 lkoranda joined #gluster
02:51 ThatGraemeGuy joined #gluster
02:54 MacWinner joined #gluster
03:07 bala joined #gluster
03:08 kshlm joined #gluster
03:17 fim joined #gluster
03:17 Rydekull joined #gluster
03:20 jiqiren joined #gluster
03:20 anotheral joined #gluster
03:20 Intensity joined #gluster
03:20 bala joined #gluster
03:20 MacWinner joined #gluster
03:20 ThatGraemeGuy joined #gluster
03:20 StarBeast joined #gluster
03:20 igorwidl joined #gluster
03:20 dencaval joined #gluster
03:20 decimoe joined #gluster
03:20 gildub joined #gluster
03:20 coredump joined #gluster
03:20 balacafalata joined #gluster
03:20 XpineX joined #gluster
03:20 mAd-1 joined #gluster
03:20 sjm joined #gluster
03:20 hagarth joined #gluster
03:20 haomai___ joined #gluster
03:20 qdk joined #gluster
03:20 elico joined #gluster
03:20 monotek joined #gluster
03:20 foster joined #gluster
03:20 marcoceppi joined #gluster
03:20 oxidane joined #gluster
03:20 DV__ joined #gluster
03:20 Dave2 joined #gluster
03:20 lanning joined #gluster
03:20 cristov joined #gluster
03:20 gomikemi1e joined #gluster
03:20 Slasheri_ joined #gluster
03:20 VerboEse joined #gluster
03:20 msciciel joined #gluster
03:20 irated joined #gluster
03:20 muhh joined #gluster
03:20 Gugge joined #gluster
03:20 the-me joined #gluster
03:20 torbjorn__ joined #gluster
03:20 kke joined #gluster
03:20 fraggeln joined #gluster
03:20 partner joined #gluster
03:20 tziOm joined #gluster
03:20 NuxRo joined #gluster
03:20 ultrabizweb joined #gluster
03:20 JonathanD joined #gluster
03:20 Kins joined #gluster
03:20 cyberbootje joined #gluster
03:20 delhage joined #gluster
03:20 tru_tru joined #gluster
03:20 Bardack joined #gluster
03:20 Ch3LL_ joined #gluster
03:20 bfoster joined #gluster
03:20 mibby joined #gluster
03:20 DanF joined #gluster
03:20 johnmwilliams__ joined #gluster
03:20 m0zes joined #gluster
03:20 yosafbridge joined #gluster
03:20 Alex joined #gluster
03:20 semiosis joined #gluster
03:20 sadbox joined #gluster
03:20 Georgyo joined #gluster
03:20 Nopik joined #gluster
03:20 eightyeight joined #gluster
03:20 JordanHackworth joined #gluster
03:20 pasqd joined #gluster
03:20 osiekhan3 joined #gluster
03:20 sputnik13 joined #gluster
03:20 nixpanic_ joined #gluster
03:20 NCommander joined #gluster
03:20 wgao joined #gluster
03:20 _NiC joined #gluster
03:20 carrar joined #gluster
03:20 tom[] joined #gluster
03:20 jezier joined #gluster
03:20 n0de_ joined #gluster
03:20 sauce joined #gluster
03:20 tg2 joined #gluster
03:20 edong23 joined #gluster
03:20 tomased joined #gluster
03:20 saltsa joined #gluster
03:20 d-fence joined #gluster
03:20 atrius` joined #gluster
03:20 _jmp__ joined #gluster
03:20 sspinner joined #gluster
03:20 vincent_vdk joined #gluster
03:20 coreping joined #gluster
03:20 hflai joined #gluster
03:20 asku joined #gluster
03:20 johnmark joined #gluster
03:20 portante joined #gluster
03:20 dblack joined #gluster
03:21 Edddgy joined #gluster
03:22 lkoranda joined #gluster
03:23 dblack joined #gluster
03:34 sputnik1_ joined #gluster
03:39 atinmu joined #gluster
03:43 cjanbanan joined #gluster
03:47 sputnik13 joined #gluster
03:48 hagarth joined #gluster
03:49 itisravi joined #gluster
03:50 shubhendu|lunch joined #gluster
04:03 kumar joined #gluster
04:07 glusterbot New news from newglusterbugs: [Bug 1119545] NUFA conflicts with virt group: result to mount failed <https://bugzilla.redhat.com/show_bug.cgi?id=1119545>
04:09 RameshN joined #gluster
04:09 itisravi joined #gluster
04:09 MacWinner joined #gluster
04:13 hchiramm_ joined #gluster
04:19 bala joined #gluster
04:19 nishanth joined #gluster
04:21 Edddgy joined #gluster
04:34 kdhananjay joined #gluster
04:43 cjanbanan joined #gluster
04:43 hagarth joined #gluster
04:44 ndarshan joined #gluster
04:44 spandit joined #gluster
04:45 sahina joined #gluster
04:51 vpshastry joined #gluster
04:53 kdhananjay joined #gluster
04:55 bharata-rao joined #gluster
04:58 ppai joined #gluster
05:01 jrcresawn joined #gluster
05:04 pvh_sa joined #gluster
05:06 raghu joined #gluster
05:08 psharma joined #gluster
05:11 saurabh joined #gluster
05:15 prasanth joined #gluster
05:22 Edddgy joined #gluster
05:22 vpshastry joined #gluster
05:22 JoeJulian OH: "EMC is no better at serving consistent performance than your mother"
05:24 hchiramm_ joined #gluster
05:27 ppp joined #gluster
05:29 Philambdo joined #gluster
05:31 davinder16 joined #gluster
05:32 rastar joined #gluster
05:34 gildub joined #gluster
05:35 vpshastry joined #gluster
05:35 gildub joined #gluster
05:36 gildub joined #gluster
05:38 kdhananjay joined #gluster
05:39 sjm left #gluster
05:45 sputnik1_ joined #gluster
05:46 lalatenduM joined #gluster
05:47 atinmu joined #gluster
05:54 vkoppad joined #gluster
05:56 rjoseph joined #gluster
05:58 LebedevRI joined #gluster
06:03 bala joined #gluster
06:07 MacWinner joined #gluster
06:11 vpshastry joined #gluster
06:12 kdhananjay joined #gluster
06:16 ppp joined #gluster
06:17 prasanth_ joined #gluster
06:19 Thilam joined #gluster
06:22 sputnik1_ joined #gluster
06:23 Edddgy joined #gluster
06:26 atinmu joined #gluster
06:27 karnan joined #gluster
06:31 ricky-ti1 joined #gluster
06:31 ppai joined #gluster
06:36 davinder16 joined #gluster
06:37 glusterbot New news from newglusterbugs: [Bug 1119582] glusterd does not start if older volume exists <https://bugzilla.redhat.com/show_bug.cgi?id=1119582>
06:45 ekuric joined #gluster
06:59 rgustafs joined #gluster
07:00 DV__ joined #gluster
07:01 hagarth joined #gluster
07:04 ctria joined #gluster
07:05 deepakcs joined #gluster
07:06 eseyman joined #gluster
07:13 andreask joined #gluster
07:17 keytab joined #gluster
07:23 ghenry joined #gluster
07:23 ghenry joined #gluster
07:24 Edddgy joined #gluster
07:27 liquidat joined #gluster
07:35 giannello joined #gluster
07:37 Pupeno joined #gluster
07:39 fsimonce joined #gluster
07:40 pvh_sa joined #gluster
07:47 cjanbanan joined #gluster
07:58 JoeJulian purpleidea: need better SEO. ;)
07:58 purpleidea JoeJulian: i have zero intentional SEO that i do myself... i just write blog posts is all :(
07:59 JoeJulian I know. Just kidding.
07:59 JoeJulian The older article was in the first page of google results.
07:59 purpleidea JoeJulian: feel free to link to my blog or whatever the cool SEO kids do :P
07:59 purpleidea maybe gluster bot is to blame!
08:00 JoeJulian imho, add a note at the top of out-of-date articles saying so and linking to the newer version.
08:00 purpleidea PS: i wrote the 'disks' patches to vagrant-libvirt for testing gluster with separate drives...
08:00 purpleidea JoeJulian: yeah good idea. i will try and remember to do that tomorrow. i'm tired now :(
08:00 JoeJulian no kidding
08:00 JoeJulian why are you even awake?
08:01 purpleidea i had to write some code
08:01 purpleidea weird code, but it's done so i'm in bed now.
08:01 JoeJulian Goodnight.
08:01 purpleidea night!
08:01 JoeJulian wait 'till you see what I'm up to...
08:07 glusterbot New news from newglusterbugs: [Bug 1119628] [SNAPSHOT] USS: The .snaps directory shows does not get refreshed immediately if snaps are taken when I/O is in progress <https://bugzilla.redhat.com/show_bug.cgi?id=1119628>
08:09 xavih joined #gluster
08:12 Norky joined #gluster
08:16 andreask joined #gluster
08:24 Edddgy joined #gluster
08:26 kanagaraj joined #gluster
08:36 Slashman joined #gluster
08:43 cjanbanan joined #gluster
08:47 hagarth joined #gluster
08:51 ppai joined #gluster
08:53 crashmag joined #gluster
08:56 vpshastry joined #gluster
08:56 mcblady joined #gluster
08:57 mcblady hey guys, i am stuck with geo replication and not sure where to go next. i followed this documentation http://www.gluster.org/community/documentation/index.php/HowTo:geo-replication and got this problem http://fpaste.org/114290/04117421/
08:57 glusterbot Title: HowTo:geo-replication - GlusterDocumentation (at www.gluster.org)
08:58 mcblady actually the fpaste is not mine, but i have the very same problem and didn't find a fix for it... could you help, please
08:59 mcblady version of glusterfs and geo is the latest one 3.5.1
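(For reference, the setup in that HowTo amounts to roughly the following on a master node, assuming passwordless SSH to the slave and an already-created slave volume; volume and host names here are hypothetical and the exact syntax can vary between 3.5.x releases:)

    gluster system:: execute gsec_create                                    # generate the pem keys used by push-pem
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status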
09:01 cjanbanan joined #gluster
09:02 aravindavk joined #gluster
09:02 Norky joined #gluster
09:02 Humble joined #gluster
09:03 kaushal_ joined #gluster
09:03 nbalachandran joined #gluster
09:07 bala joined #gluster
09:08 glusterbot New news from newglusterbugs: [Bug 1115748] Bricks are unsync after recevery even if heal says everything is fine <https://bugzilla.redhat.com/show_bug.cgi?id=1115748>
09:13 cjanbanan joined #gluster
09:18 atinmu joined #gluster
09:20 shubhendu|lunch joined #gluster
09:22 ndarshan joined #gluster
09:25 Edddgy joined #gluster
09:30 ctria joined #gluster
09:34 rjoseph joined #gluster
09:39 cjanbanan joined #gluster
09:57 sahina joined #gluster
09:59 atinmu joined #gluster
10:00 rjoseph joined #gluster
10:00 ndarshan joined #gluster
10:00 shubhendu|lunch joined #gluster
10:00 andreask joined #gluster
10:00 vkoppad joined #gluster
10:00 bharata-rao joined #gluster
10:00 kumar joined #gluster
10:00 ThatGraemeGuy joined #gluster
10:00 StarBeast joined #gluster
10:00 balacafalata joined #gluster
10:00 XpineX joined #gluster
10:00 haomai___ joined #gluster
10:00 elico joined #gluster
10:00 monotek joined #gluster
10:00 oxidane joined #gluster
10:00 lanning joined #gluster
10:00 torbjorn__ joined #gluster
10:00 the-me joined #gluster
10:00 Gugge joined #gluster
10:00 muhh joined #gluster
10:00 irated joined #gluster
10:01 hagarth joined #gluster
10:01 DJClean joined #gluster
10:01 karnan joined #gluster
10:01 igorwidl joined #gluster
10:01 dencaval joined #gluster
10:01 decimoe joined #gluster
10:01 coredump joined #gluster
10:01 gomikemi1e joined #gluster
10:01 DanF joined #gluster
10:01 mibby joined #gluster
10:01 bfoster joined #gluster
10:01 Ch3LL_ joined #gluster
10:01 Bardack joined #gluster
10:01 tru_tru joined #gluster
10:01 delhage joined #gluster
10:01 cyberbootje joined #gluster
10:01 Kins joined #gluster
10:01 JonathanD joined #gluster
10:01 ultrabizweb joined #gluster
10:01 NuxRo joined #gluster
10:01 tziOm joined #gluster
10:01 partner joined #gluster
10:01 fraggeln joined #gluster
10:01 kke joined #gluster
10:01 DJClean joined #gluster
10:02 nishanth joined #gluster
10:03 qdk joined #gluster
10:03 foster joined #gluster
10:03 marcoceppi joined #gluster
10:03 Dave2 joined #gluster
10:03 _NiC joined #gluster
10:03 wgao joined #gluster
10:03 NCommander joined #gluster
10:03 nixpanic_ joined #gluster
10:03 osiekhan3 joined #gluster
10:03 pasqd joined #gluster
10:03 JordanHackworth joined #gluster
10:03 eightyeight joined #gluster
10:03 Nopik joined #gluster
10:03 Georgyo joined #gluster
10:03 sadbox joined #gluster
10:03 semiosis joined #gluster
10:03 Alex joined #gluster
10:03 yosafbridge joined #gluster
10:03 m0zes joined #gluster
10:03 johnmwilliams__ joined #gluster
10:03 Slasheri_ joined #gluster
10:03 VerboEse joined #gluster
10:03 msciciel joined #gluster
10:03 carrar joined #gluster
10:03 tom[] joined #gluster
10:03 jezier joined #gluster
10:03 edong23 joined #gluster
10:03 tomased joined #gluster
10:03 saltsa joined #gluster
10:03 d-fence joined #gluster
10:03 _jmp__ joined #gluster
10:03 sspinner joined #gluster
10:03 vincent_vdk joined #gluster
10:03 coreping joined #gluster
10:03 hflai joined #gluster
10:03 asku joined #gluster
10:03 johnmark joined #gluster
10:03 portante joined #gluster
10:08 17SAAF03J joined #gluster
10:08 Humble joined #gluster
10:08 Norky joined #gluster
10:08 aravindavk joined #gluster
10:08 ekuric joined #gluster
10:08 lalatenduM joined #gluster
10:08 mAd-1 joined #gluster
10:08 cristov joined #gluster
10:08 n0de_ joined #gluster
10:08 sauce joined #gluster
10:08 tg2 joined #gluster
10:08 atrius` joined #gluster
10:08 dblack joined #gluster
10:09 edong23 joined #gluster
10:10 rgustafs joined #gluster
10:11 jiqiren joined #gluster
10:12 Intensity joined #gluster
10:12 anotheral joined #gluster
10:12 dusmant joined #gluster
10:13 cjanbanan joined #gluster
10:23 rjoseph joined #gluster
10:26 Edddgy joined #gluster
10:33 kshlm joined #gluster
10:34 skulker joined #gluster
10:37 skulker gluster 3.5.1 on CentOS 6, 2 storage servers, 1 client. If I init a kernel panic on a storage server the glusterfs client will hang for 2 minutes. Is there a volume option to decrease the timeout?
10:37 ppai joined #gluster
10:39 NuxRo skulker: http://gluster.org/community/documentation/index.php/Gluster_3.2:_Setting_Volume_Options#network.ping-timeout ?
10:39 glusterbot Title: Gluster 3.2: Setting Volume Options - GlusterDocumentation (at gluster.org)
10:40 skulker thanks NuxRo let me look into that.
10:42 skulker though I wonder if it will have an effect, as when the kernel panic happens the node is still pingable and peer probe does connect in this state.
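(The option NuxRo is pointing at is set per volume; a minimal sketch, using a hypothetical volume name, and keeping in mind that 42 seconds is the usual default and that very low values can cause spurious disconnects under load:)

    gluster volume set myvol network.ping-timeout 10    # lower the timeout before a dead brick is given up on
    gluster volume info myvol                           # the changed option shows up under "Options Reconfigured"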
10:47 ctria joined #gluster
10:50 bala joined #gluster
11:04 shubhendu|lunch joined #gluster
11:04 ndarshan joined #gluster
11:07 B21956 joined #gluster
11:09 cjanbanan joined #gluster
11:17 sahina joined #gluster
11:17 atinmu joined #gluster
11:17 NuxRo skulker: let me know if it works :D
11:19 skulker Thanks NuxRo: actually it appeared to have worked very well. I'm testing an HA FTP storage server, and SFTP doesn't want to fail over; it hangs until that storage server is available again. Direct writes continue without issue, however.
11:19 ppai joined #gluster
11:21 RioS2 For general purpose HA storage with both small and large files, on a mix of non-raid hardware, are Distributed Replicated Volumes the best algo?
11:22 NuxRo RioS2: probably, yes
11:22 NuxRo though I would use RAID on the machines if I were you
11:23 RioS2 ah, it's not like hdfs then?
11:23 NuxRo I don't know HDFS
11:23 NuxRo what did you mean by it?
11:24 RioS2 with hdfs you give it the most raw disk capacity and it takes care of the replication
11:24 RioS2 so if you lose a disk there are other copies and it re-replicates to maintain a specified block replication amount
11:26 NuxRo RioS2: glusterfs replicated volume works similarly
11:26 NuxRo but personally I have found the hassle of replacing a dead brick bigger than just replacing a raid member
11:27 Edddgy joined #gluster
11:29 RioS2 could be bad though to lose 2 of 8 disks in raid 5
11:31 rjoseph joined #gluster
11:31 RameshN joined #gluster
11:32 kanagaraj joined #gluster
11:34 kanagaraj joined #gluster
11:34 gildub joined #gluster
11:36 RioS2 gluster leaves it up to the user to make sure replica bricks are not on the same node?
11:36 RioS2 hm. I was hoping I could just give it some storage and it would figure that part out...
11:37 hagarth joined #gluster
11:41 diegows joined #gluster
11:49 NuxRo RioS2: when you create the volume just make sure to define the bricks in order and from different servers
11:49 NuxRo eg, gluster volume create BLAH replica 2 server1:/brick server2:/brick etc
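(To make the ordering explicit: bricks are grouped into replica sets in the order they are listed, so with replica 2 every consecutive pair becomes a mirror. A sketch with hypothetical server and brick paths:)

    gluster volume create myvol replica 2 \
        server1:/export/brick1 server2:/export/brick1 \
        server3:/export/brick1 server4:/export/brick1
    # server1+server2 form one replica pair, server3+server4 the other; files are distributed across the two pairs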
12:12 itisravi_ joined #gluster
12:12 chirino joined #gluster
12:18 ppai joined #gluster
12:24 gmcwhistler joined #gluster
12:28 Edddgy joined #gluster
12:37 plarsen joined #gluster
12:39 anoopcs joined #gluster
12:40 hagarth1 joined #gluster
12:41 doo joined #gluster
12:44 theron joined #gluster
12:46 gmcwhistler joined #gluster
13:14 hagarth joined #gluster
13:16 julim joined #gluster
13:25 davinder16 joined #gluster
13:28 Edddgy joined #gluster
13:35 japuzzo joined #gluster
13:37 _Bryan_ joined #gluster
13:39 glusterbot New news from newglusterbugs: [Bug 1119774] background meta-data data missing-entry self-heal failed <https://bugzilla.redhat.com/show_bug.cgi?id=1119774>
13:45 nueces joined #gluster
13:49 xavih joined #gluster
13:56 jobewan joined #gluster
14:00 deepakcs joined #gluster
14:01 mortuar joined #gluster
14:13 rwheeler joined #gluster
14:15 nshaikh joined #gluster
14:15 anoopcs joined #gluster
14:16 wushudoin joined #gluster
14:19 doo joined #gluster
14:25 chirino joined #gluster
14:27 Thilam Hi guys, I've been running version 3.5.1 of glusterfs in a prod env since Friday and I have some weird issues
14:28 Thilam I have 3 bricks and, unexpectedly, one of them goes down
14:28 Thilam Brick projet1:/glusterfs/projets-brick1/projets         49154   Y       19080
14:28 Thilam Brick projet2:/glusterfs/projets-brick2/projets         N/A     N       300
14:28 Thilam Brick projet3:/glusterfs/projets-brick3/projets         49154   Y       27436
14:28 Thilam for example
14:29 Thilam I had a look in brick logs but I see nothing
14:29 Edddgy joined #gluster
14:30 Thilam to get the fs work again, I kill glusterfs processes on the brick and restart the glusterfs-server daemon
14:30 Thilam do you have an idea on how I can debug this issue?
14:33 ndevos Thilam: it looks as if the glusterfsd for projet2:/glusterfs/projets-brick2/projets exited (crashed?)
14:34 ndevos Thilam: sometimes /var/log/messages contains a note about a segmentation fault, did you check there?
14:34 Thilam yes it crashes
14:34 simulx joined #gluster
14:35 ndevos Thilam: also, if a brick process is missing, you should be able to get it back with: gluster volume start $VOL force
14:35 Thilam ok, that is good to know
14:35 Thilam there is nothing in /var/log/messages
14:36 B21956 joined #gluster
14:36 Thilam [2014-07-15 14:06:18.650287] W [socket.c:522:__socket_rwv] 0-management: readv on /var/run/c1c9bfa44ddab2507d64407ec7119cef.socket failed (No data available)
14:36 Thilam [2014-07-15 14:06:18.650864] I [glusterd-handler.c:3713:__glusterd_brick_rpc_notify] 0-management: Disconnected from projet2:/glusterfs/projets-brick2/projets
14:36 Thilam this is what I have in  etc-glusterfs-glusterd.vol.log
14:37 Thilam of brick2
14:38 ndevos do you have anything with a close timestamp in /var/log/glusterfs/bricks/glusterfs-projets-brick2-projets.log (or similar filename)
14:38 anoopcs joined #gluster
14:39 ndevos ~paste | Thilam
14:39 glusterbot Thilam: For RPM based distros you can yum install fpaste, for debian and ubuntu it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
14:39 bharata-rao joined #gluster
14:40 Thilam no nothing until I restart the daemon
14:41 Thilam yes I got something
14:41 Thilam paste incoming
14:42 Thilam http://fpaste.org/118116/14054353/
14:42 glusterbot Title: #118116 Fedora Project Pastebin (at fpaste.org)
14:42 Thilam I think it's what you are looking for
14:43 ndevos Thilam: right, that's definitely helpful, any messages just before that?
14:45 Thilam [2014-07-15 08:00:03.280108] E [posix.c:3565:posix_getxattr] 0-projets-posix: getxattr failed on /glusterfs/projets-brick2/projets/.glusterfs/9f/eb/9febf8a5-b25b-4173-b766-b59e1572249f: trusted.glusterfs.dht.linkto (No such file or directory)
14:45 Thilam [2014-07-15 12:00:01.800963] E [posix.c:3565:posix_getxattr] 0-projets-posix: getxattr failed on /glusterfs/projets-brick2/projets/.glusterfs/9f/eb/9febf8a5-b25b-4173-b766-b59e1572249f: trusted.glusterfs.dht.linkto (No such file or directory)
14:45 Thilam those two
14:45 Thilam but they happen 2 hours before
14:45 ndevos okay, those are not important then
14:45 Thilam so I think it's not link
14:45 Thilam ed
14:46 Thilam I'm sorry, I'm full of question/problems since I'm using gluster :/
14:47 ndevos Thilam: we're here and on the ,,(mailing lists) to answer them :)
14:47 glusterbot Thilam: I do not know about 'mailing lists', but I do know about these similar topics: 'mailing list', 'mailinglist', 'mailinglists'
14:48 ndevos @mailinglists
14:48 glusterbot ndevos: http://www.gluster.org/interact/mailinglists
14:48 ndevos @learn mailing lists as http://www.gluster.org/interact/mailinglists
14:48 glusterbot ndevos: The operation succeeded.
14:50 ndevos Thilam: if you are on an rpm based distro, you may have a coredump of the crashed brick, could you check under /var/spool/abrt?
14:50 Thilam I'm on debian wheezy
14:50 Thilam !=
14:50 Thilam :)
14:50 ndevos hmm, no idea how/where it works there
14:51 Thilam is gluster more stable under an rpm distro?
14:51 ndevos that should not really matter, but most devs will be working on fedora/rhel/centos, so that tends to be tested more often
14:54 ira joined #gluster
14:54 Thilam [16:50] <ndevos> hmm, no idea how/where it works there
14:54 Thilam neither do I
14:58 ndevos Thilam: maybe you're lucky and a coredump is saved as /core.* ?
14:59 Thilam I think it is not activated
15:00 ndevos oh, ok, that would be a shame
15:01 johnmark kkeithley: ping re: centos storage sig
15:01 ndevos Thilam: do you know if debuginfo is available on your system?
15:01 johnmark kkeithley: so I thought the point of the SIG was so that there would be packages in centos 7
15:02 ndevos Thilam: could you try to execute this? gdb /usr/lib/x86_64-linux-gnu/glusterfs/3.5.1/xlator/features/marker.so
15:02 ndevos Thilam: and, when at the <gdb> prompt: l *mq_loc_fill_from_name+0x86
15:02 Thilam debuginfo is not available natively
15:03 ndevos oh :-/
15:03 Thilam I'm installing gdb
15:03 harish joined #gluster
15:04 Thilam 0x14896 is in mq_loc_fill_from_name (marker-quota.c:178).
15:04 Thilam 173     marker-quota.c: No such file or directory.
15:05 Thilam this is the result
15:05 ndevos oh, well, at least that shows a line number :)
15:10 Thilam it helps ? :)
15:11 lmickh joined #gluster
15:12 B21956 joined #gluster
15:12 anoopcs joined #gluster
15:13 cjanbanan joined #gluster
15:13 ndevos Thilam: yes, it helps, but I do not see why it would be a problem, or if it could have been fixed in the master branch
15:14 chirino joined #gluster
15:14 ndevos Thilam: could you file a bug against the marker component, add the call-trace from the fpaste, and the output of the gdb-command
15:14 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
15:15 ndevos Thilam: mention a little how your volume is configured/used (NFS? geo-rep? quota? ...), and include 'gluster volume info $VOLUME'
15:15 coredump joined #gluster
15:15 Thilam yes I can, I'll also activate coredump
15:15 ekuric left #gluster
15:15 Thilam and I'll update the bug with this info if it crashes again
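(On Debian there is no abrt, but core dumps can be enabled the generic Linux way; a sketch, assuming the dump directory is created first and that the ulimit is in effect for the shell or init script that actually starts glusterd/glusterfsd:)

    mkdir -p /var/crash
    echo '/var/crash/core.%e.%p.%t' > /proc/sys/kernel/core_pattern   # name cores by executable, pid, timestamp
    ulimit -c unlimited   # must apply to the brick processes, e.g. set it in the init script or /etc/security/limits.conf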
15:16 ndevos Thilam: on the line gdb points out, a quite insecure string-access is done - at least, it's dangerous in case the string is empty, but we can't see if that is the case without a coredump
15:17 Thilam this is no marker component in the list
15:17 ndevos oh, lets see whats there
15:18 ndevos Thilam: I'd use quota in that case :)
15:19 bala joined #gluster
15:19 anoopcs1 joined #gluster
15:23 lmickh joined #gluster
15:25 theron joined #gluster
15:27 edong23 joined #gluster
15:30 Edddgy joined #gluster
15:33 aub joined #gluster
15:35 aub if i want a 2TB volume and 2x replication, can I create four nodes that each have a 1TB disk on them as the cluster? or is the volume size limited to the disk size?
15:36 Humble joined #gluster
15:37 ndevos aub: you can 'create four nodes that each have a 1TB disk' to get a 2TB (usable space) distribute-replicate volume
15:38 aub ndevos: nice, thanks
15:38 ndevos aub: the maximum size of a file would be limited to 1TB, so thats something you need to be aware of
15:38 aub ndevos: makes sense
15:39 aub ndevos: could i do the same with two nodes, each having two 1TB drives?
15:39 aub ndevos: i’m guessing that would wreck the replication
15:39 glusterbot New news from newglusterbugs: [Bug 1119827] Brick goes offline unexpectedly <https://bugzilla.redhat.com/show_bug.cgi?id=1119827>
15:39 skulker left #gluster
15:40 ndevos aub: you could do that too, but you will limit the number of clients/bandwidth that way
15:41 ndevos Thilam: thanks for the bug! I just found bug 1118591, it sounds quite similar
15:41 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1118591 urgent, urgent, ---, vshastry, POST , core: all brick processes crash when quota is enabled
15:41 aub ndevos: makes sense. thanks!
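(For the two-node, two-drives-per-node variant aub asked about, replication is not wrecked as long as the brick order keeps each replica pair on different nodes; a sketch with hypothetical node and brick names:)

    gluster volume create gv0 replica 2 \
        node1:/bricks/b1 node2:/bricks/b1 \
        node1:/bricks/b2 node2:/bricks/b2
    # each replica pair spans both nodes, so four 1TB bricks still give 2TB usable with node-level redundancy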
15:42 clutchk joined #gluster
15:43 anoopcs joined #gluster
15:44 _dist joined #gluster
15:48 Matthaeus joined #gluster
15:50 davinder16 joined #gluster
15:52 vpshastry joined #gluster
15:54 DanishMan joined #gluster
16:01 dblack joined #gluster
16:03 aravindavk joined #gluster
16:04 nage joined #gluster
16:07 theron_ joined #gluster
16:10 ramteid joined #gluster
16:10 Mo__ joined #gluster
16:19 aub joined #gluster
16:19 nbalachandran joined #gluster
16:30 Edddgy joined #gluster
16:32 SFLimey joined #gluster
16:33 cultav1x joined #gluster
16:38 chirino joined #gluster
16:43 Peter4 joined #gluster
16:43 cjanbanan joined #gluster
16:53 sputnik1_ joined #gluster
16:53 Matthaeus joined #gluster
17:01 Edddgy joined #gluster
17:02 MacWinner joined #gluster
17:03 plarsen joined #gluster
17:06 zerick joined #gluster
17:08 vpshastry joined #gluster
17:08 anoopcs1 joined #gluster
17:16 mcblady joined #gluster
17:17 sjm joined #gluster
17:29 rotbeard joined #gluster
17:34 chirino joined #gluster
17:38 clutchk joined #gluster
17:39 igorwidl left #gluster
17:43 cjanbanan joined #gluster
17:46 tdasilva joined #gluster
17:49 theron joined #gluster
18:01 dberry joined #gluster
18:01 dberry joined #gluster
18:13 plarsen joined #gluster
18:14 hchiramm_ joined #gluster
18:15 lalatenduM joined #gluster
18:24 Matthaeus joined #gluster
18:31 _dist JoeJulian: did you get my msg yesterday?
18:33 bene2 joined #gluster
18:36 JoeJulian Just saw you ask if I was here.
18:37 _dist it was a pm
18:38 JoeJulian gah! I'm always missing those.
18:39 _dist tried again :)
18:39 JoeJulian Oh, they're still there, I just never notice them because they show up at the bottom of the channel list and I seldom scroll down.
18:40 JoeJulian Your big problem is that I started a new job 6 weeks ago and am cleaning up messes from before I started.
18:40 JoeJulian I'm really close to having free time again.
18:40 JoeJulian Probably this week.
18:41 _dist ok, we're just coming up on a deadline, I doubt that you pointing me in the right direction to do my own research would be as efficient, or even fruitful
18:41 JoeJulian I'd be looking at wireshark and straces most likely.
18:42 JoeJulian back in a few...
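(A sketch of the kind of capture JoeJulian means, assuming the usual defaults of port 24007 for glusterd management and 49152 and up for brick processes on recent releases; the exact brick port range depends on the setup, and the pid is a placeholder:)

    tcpdump -i any -s 0 -w gluster.pcap 'port 24007 or portrange 49152-49251'   # capture gluster traffic for wireshark
    strace -f -tt -o client.strace -p <pid-of-glusterfs-client-process>         # trace syscalls of the hanging client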
18:44 qdk joined #gluster
18:48 giannello joined #gluster
19:00 sjm left #gluster
19:04 hchiramm_ joined #gluster
19:10 glusterbot New news from newglusterbugs: [Bug 1119894] Glustershd memory usage too high <https://bugzilla.redhat.com/show_bug.cgi?id=1119894>
19:13 cjanbanan joined #gluster
19:15 ekuric joined #gluster
19:27 _dist JoeJulian: If you want a pcap, and let me know how to run the trace I can do that. However, I suspect you'd just end up wanting to tweak it on first review so perhaps waiting is more sensible
19:40 glusterbot New news from newglusterbugs: [Bug 1075417] Spelling mistakes and typos in the glusterfs source <https://bugzilla.redhat.com/show_bug.cgi?id=1075417>
19:41 jmarley joined #gluster
19:47 rwheeler joined #gluster
19:50 Matthaeus joined #gluster
19:51 tdasilva_ joined #gluster
19:52 Matthaeus joined #gluster
20:07 zerick joined #gluster
20:08 gildub joined #gluster
20:14 tdasilva joined #gluster
20:14 vpshastry joined #gluster
20:15 Matthaeus joined #gluster
20:17 StarBeast Hi all. I have a question. I am running #gluster v heal <volname> info healed command and I always see that heal daemon heals something every 600 sec. Does this mean that I have brick connectivity flapping or something else, or is it normal behaviour?
20:23 _dist StarBeast: that's not normal. are you running it with a watch? you might on occasion see "another transaction is in progress"
20:26 StarBeast I am not running this in watch, just every few minutes I am checking what has been healed, and I see a bunch of files in this list. gluster v heal <volname> info tells me nothing, all zeroes
20:27 johnmark apologies, but the SSL on gluster.org was misconfigured
20:27 johnmark but has now been fixed
20:31 StarBeast _dist: Using gluster 3.5.1
20:32 _dist StarBeast: I'm still using 3.4.2, but I've never seen it "occasionally" say the daemon isn't running. I'd check your etc and brick logs for errors around that time
20:33 semiosis StarBeast: what file is being healed every 10 min?  same file?  different?  the ,,(gfid resolver) may help if you dont know the filename
20:33 glusterbot StarBeast: https://gist.github.com/4392640
20:35 StarBeast semiosis: Hm.. something seems wrong. I've restarted one server, but still see nothing in heal info log :(
20:36 StarBeast semiosis: files in "healed" list always different
20:40 StarBeast semiosis: I don't know gfid, files are listed by filenames
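(When heal info lists a bare gfid instead of a filename, the mapping the linked gfid resolver relies on can be sketched like this for regular files, since the .glusterfs entry on a brick is a hard link to the real file; the brick path is hypothetical and the gfid is reused from earlier in this log:)

    GFID=9febf8a5-b25b-4173-b766-b59e1572249f
    BRICK=/glusterfs/projets-brick2/projets
    # find the real path sharing the same inode as the gfid hard link, excluding .glusterfs itself
    find "$BRICK" -samefile "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID" ! -path '*/.glusterfs/*'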
20:42 jmarley joined #gluster
20:48 StarBeast I am struggling with this issue for weeks now. :( my vm's always go to read-only mode under high load. Need some help.
20:52 _dist StarBeast: I came across this earlier today, but I'm not educated on it enough to say if it's related https://bugzilla.redhat.com/show_bug.cgi?id=1116854
20:52 glusterbot Bug 1116854: unspecified, unspecified, ---, skoduri, POST , libgfapi: Issue with self-healing of files during glfs_resolve
20:52 _dist (you'd have to be using libgfapi instead of fuse)
20:52 sjm joined #gluster
20:53 _dist I'm heading out, be back tomorrow :)
20:54 StarBeast _dist: see u
21:07 VeggieMeat joined #gluster
21:07 avati joined #gluster
21:07 fyxim_ joined #gluster
21:07 verdurin joined #gluster
21:07 masterzen joined #gluster
21:11 swebb joined #gluster
21:12 eryc joined #gluster
21:17 andreask joined #gluster
21:19 cyberbootje joined #gluster
21:49 plarsen joined #gluster
21:51 burn420 joined #gluster
21:56 Rydekull joined #gluster
21:59 tdasilva joined #gluster
22:04 Pupeno_ joined #gluster
22:14 mcblady joined #gluster
22:40 mcblady joined #gluster
22:48 dencaval joined #gluster
22:49 sage joined #gluster
22:56 fidevo joined #gluster
23:08 andreask joined #gluster
23:08 theron joined #gluster
23:10 cyberbootje joined #gluster
23:10 cyberbootje joined #gluster
23:12 theron joined #gluster
23:39 Pupeno joined #gluster
23:40 diegows joined #gluster
23:47 burnalot joined #gluster
23:58 _Bryan_ joined #gluster
