
IRC log for #gluster, 2013-11-27


All times shown according to UTC.

Time Nick Message
00:07 mattappe_ joined #gluster
00:11 gdubreui joined #gluster
00:12 harish_ joined #gluster
00:13 mattapp__ joined #gluster
00:15 gdubreui joined #gluster
00:16 haritsu joined #gluster
00:34 _ilbot joined #gluster
00:34 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
00:40 an joined #gluster
01:00 haritsu joined #gluster
01:02 khushildep joined #gluster
01:05 glusterbot joined #gluster
01:29 _BryanHm_ joined #gluster
01:32 mattapp__ joined #gluster
01:33 raghug joined #gluster
01:36 _pol joined #gluster
01:56 harish_ joined #gluster
02:00 haritsu joined #gluster
02:02 bala joined #gluster
02:05 glusterbot New news from newglusterbugs: [Bug 1033275] The glusterfs-geo-replication RPM missing dependency on python-ctypes <http://goo.gl/fw8PV6>
02:26 khushildep joined #gluster
02:33 ikk joined #gluster
02:35 rjoseph joined #gluster
02:44 johnbot11 joined #gluster
02:46 Guest18713 joined #gluster
03:00 jlauro joined #gluster
03:01 johnbot11 joined #gluster
03:01 haritsu joined #gluster
03:05 johnbot11 joined #gluster
03:09 johnbot11 joined #gluster
03:11 johnbot1_ joined #gluster
03:12 johnbo___ joined #gluster
03:12 y4m4_ joined #gluster
03:26 bharata-rao joined #gluster
03:34 johnbot11 joined #gluster
03:38 RameshN joined #gluster
03:38 shubhendu joined #gluster
03:43 kanagaraj joined #gluster
03:54 shyam joined #gluster
03:55 itisravi joined #gluster
03:56 anands joined #gluster
03:58 an joined #gluster
04:02 haritsu joined #gluster
04:02 johnbot11 joined #gluster
04:03 johnbot1_ joined #gluster
04:06 mattapp__ joined #gluster
04:11 mattapp__ joined #gluster
04:12 chirino joined #gluster
04:32 gdubreui joined #gluster
04:34 shri_ joined #gluster
04:34 shri_ joined #gluster
04:35 johnbot11 joined #gluster
04:41 khushildep joined #gluster
04:43 gdubreui joined #gluster
04:45 shri_ joined #gluster
04:45 bulde joined #gluster
04:46 vpshastry joined #gluster
04:51 davidbierce joined #gluster
04:52 davidbierce joined #gluster
04:53 bala joined #gluster
04:56 shyam joined #gluster
04:57 MiteshShah joined #gluster
04:59 saurabh joined #gluster
05:02 mattapp__ joined #gluster
05:03 haritsu joined #gluster
05:03 mattapp__ joined #gluster
05:04 CheRi joined #gluster
05:06 satheesh joined #gluster
05:09 rjoseph joined #gluster
05:12 spandit joined #gluster
05:18 shruti joined #gluster
05:22 dusmant joined #gluster
05:22 shyam joined #gluster
05:22 lalatenduM joined #gluster
05:23 nshaikh joined #gluster
05:25 johnbot11 joined #gluster
05:26 johnbot11 joined #gluster
05:33 calum_ joined #gluster
05:35 sgowda joined #gluster
05:36 rastar joined #gluster
05:40 mattapp__ joined #gluster
05:50 vshankar joined #gluster
05:51 aravindavk joined #gluster
05:53 bulde joined #gluster
05:53 raghu joined #gluster
05:58 psharma joined #gluster
06:02 nshaikh joined #gluster
06:02 mohankumar joined #gluster
06:04 haritsu joined #gluster
06:05 glusterbot New news from newglusterbugs: [Bug 1035107] RFE: new FOP called discover() for glusterfs. <http://goo.gl/TtqNvf>
06:19 harish_ joined #gluster
06:47 sgowda joined #gluster
06:48 davinder joined #gluster
07:05 mattappe_ joined #gluster
07:08 haritsu joined #gluster
07:10 krypto joined #gluster
07:12 haritsu joined #gluster
07:14 ngoswami joined #gluster
07:20 jtux joined #gluster
07:24 haritsu joined #gluster
07:40 rastar joined #gluster
07:47 satheesh joined #gluster
07:49 haritsu joined #gluster
07:50 haritsu joined #gluster
07:55 shri_ joined #gluster
07:56 ekuric joined #gluster
08:06 klaxa|work joined #gluster
08:08 StarBeast joined #gluster
08:08 satheesh joined #gluster
08:09 _polto_ joined #gluster
08:10 eseyman joined #gluster
08:15 ctria joined #gluster
08:18 cca joined #gluster
08:20 keytab joined #gluster
08:21 Staples84 joined #gluster
08:21 cca Hi, I'm trying out GlusterFS and its geo-replication function at my company to create a mirror for our DVCS. As we have a lot of files (60GB) with millions of hardlinks, I've run into some performance problems :-( The servers are located in two different countries and have a 20 Mbit/s internet connection between them. Is it normal that it takes days for just a small part of this data?
08:24 haritsu joined #gluster
08:24 an joined #gluster
08:26 anands joined #gluster
08:28 haritsu joined #gluster
08:30 shyam joined #gluster
08:37 mattappe_ joined #gluster
08:41 abyss^ Hi, all my gluster clients freeze. How can I check what happened? I looked in the logs, but there's only a message about a lost connection (it's fine now). ps shows some of yesterday's processes that were writing files to the gluster disk, and those processes are frozen. strace on those processes shows: Process 1431 attached - interrupt to quit, but it never quits
08:43 tziOm joined #gluster
08:44 StarBeast joined #gluster
08:50 aravindavk joined #gluster
08:54 dusmant joined #gluster
08:58 ccha joined #gluster
08:59 bala joined #gluster
08:59 ccha joined #gluster
09:02 mohankumar__ joined #gluster
09:10 satheesh joined #gluster
09:10 rjoseph joined #gluster
09:11 andreask joined #gluster
09:14 an joined #gluster
09:19 bulde joined #gluster
09:22 tziOm joined #gluster
09:22 bulde joined #gluster
09:23 harish_ joined #gluster
09:25 ricky-ti1 joined #gluster
09:29 dusmant joined #gluster
09:29 haritsu joined #gluster
09:29 CheRi joined #gluster
09:29 dusmant joined #gluster
09:32 bala joined #gluster
09:34 Norky joined #gluster
09:35 cca Is someone able to answer my question?
09:37 hagarth joined #gluster
09:38 sgowda joined #gluster
09:41 anands joined #gluster
09:44 social cca: I would say it takes ages to traverse your gluster mount
09:44 lalatenduM joined #gluster
09:50 shubhendu joined #gluster
09:53 khushildep joined #gluster
09:56 ndarshan joined #gluster
09:57 kanagaraj joined #gluster
09:57 RameshN joined #gluster
09:59 GabrieleV joined #gluster
09:59 msvbhat cca: geo-rep performance depends a lot on your network latency and your data set. And which version of glusterfs are you using?
09:59 Norky joined #gluster
09:59 msvbhat cca: Older versions of glusterfs don't support hardlink syncing through geo-rep
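
For reference, the state and tuning of a geo-rep session can be inspected from the master side. A sketch with placeholder names (neither the volume name nor the slave URL appear in the log):

    # "myvol" and the slave URL are placeholders for the real session
    gluster volume geo-replication myvol ssh://root@remote:/data/mirror status
    gluster volume geo-replication myvol ssh://root@remote:/data/mirror config
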
10:02 kanagaraj joined #gluster
10:05 Norky joined #gluster
10:07 rnachimu joined #gluster
10:07 haritsu joined #gluster
10:07 Norky joined #gluster
10:09 vshankar joined #gluster
10:12 msv joined #gluster
10:17 andreask joined #gluster
10:23 shyam joined #gluster
10:23 an joined #gluster
10:27 mohankumar joined #gluster
10:28 bharata-rao joined #gluster
10:40 kanagaraj joined #gluster
10:40 andreask joined #gluster
10:46 aravindavk joined #gluster
10:47 ndarshan joined #gluster
10:47 anands joined #gluster
10:48 dusmant joined #gluster
10:50 muhh joined #gluster
10:54 shubhendu joined #gluster
10:57 meghanam joined #gluster
11:04 rjoseph joined #gluster
11:06 mohankumar joined #gluster
11:09 RameshN joined #gluster
11:11 shyam joined #gluster
11:22 dhyan joined #gluster
11:24 calum_ joined #gluster
11:27 lalatenduM joined #gluster
11:38 Staples84 joined #gluster
11:40 dhyan joined #gluster
11:41 dhyan left #gluster
11:44 _polto_ joined #gluster
11:44 diegows joined #gluster
11:44 _polto_ joined #gluster
11:46 dylan joined #gluster
11:47 shyam joined #gluster
11:48 sgowda joined #gluster
11:53 anands joined #gluster
11:55 andreask joined #gluster
11:59 haritsu_ joined #gluster
11:59 itisravi_ joined #gluster
11:59 lpabon joined #gluster
12:00 an joined #gluster
12:01 haritsu joined #gluster
12:03 klaxa|work left #gluster
12:05 kevein joined #gluster
12:06 geewiz joined #gluster
12:16 hagarth joined #gluster
12:17 mohankumar joined #gluster
12:19 andreask joined #gluster
12:20 davidbierce joined #gluster
12:21 CheRi joined #gluster
12:22 kkeithley joined #gluster
12:27 vpshastry1 joined #gluster
12:34 rastar joined #gluster
12:36 mohankumar joined #gluster
12:36 rcheleguini joined #gluster
12:37 haritsu joined #gluster
12:39 kevein joined #gluster
12:44 mattappe_ joined #gluster
12:44 dylan joined #gluster
12:45 haritsu joined #gluster
12:47 andreask joined #gluster
12:50 dylan Hi
12:50 glusterbot dylan: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
12:51 dylan Hi, I'm new to GlusterFS but I was able to get the setup running
12:51 dylan gluster volume create gv0 replica 2 node61:/data/gv0/brick1 node160:/data/gv0/brick1
12:52 dylan everything works fine; the problem is that I flooded the disk with lots of big files and made it 100% full
12:52 mohankumar joined #gluster
12:52 dylan Then I deleted them but df -h shows 100% even after the file deletion
12:53 dylan I restarted the nodes but the brick usage still shows 100%
12:53 dylan then I found .glusterfs taking up the full size of the disk.
12:53 dylan how can I clear this?
12:55 dylan btw, I am still able to write big files to the disk even though it says 100%
12:57 kkeithley .glusterfs?  Do you mean /var/log/glusterfs/ ?
12:58 morse joined #gluster
12:59 dylan data/gv0/brick1 is my brick and it is in that directory
13:00 dylan glusterfs 3.4.1 built on Oct 28 2013 11:12:34
13:01 dylan it has many directories like 00  0b  11  25  32  41
13:01 dylan data/gv0/brick1/.glusterfs
13:01 samppah dylan: how and where did you delete that file?
13:02 dylan from the glusterfs server node
13:02 dylan not from the client
13:03 kkeithley the "files" in .glusterfs are hard links to the real files in the brick.
13:05 dylan So it's not advisable to delete from the glusterfs server node, and it should always be done from the client..?
13:05 kkeithley deleting them usually isn't going to free any space in the brick
13:06 giannello joined #gluster
13:07 kkeithley correct, you should not delete files in the bricks, only from a client.
13:08 dylan so is the best way to clear these .glusterfs files to do it manually, or do we have a command..?
13:09 kkeithley the "files" in .glusterfs are hard links to the real files in the brick. They don't consume any space, you probably don't need to delete them.
13:10 kkeithley It's not clear to me how .glusterfs could be 100% full if the rest of the brick isn't.
13:10 kkeithley something else is messed up
13:11 dylan #du -sh /data/gv0/brick1/.glusterfs >> 3.2G    /data/gv0/brick1/.glusterfs
13:11 LoudNoises joined #gluster
13:12 dylan and I still can write data
13:12 kkeithley you haven't done something weird like mounted another volume at .glusterfs have you?
13:12 kkeithley fpaste df or mount output
13:12 dylan nope
13:15 dylan dev/mapper/VolGroup00-LogVol00
13:15 dylan 68G   65G     0 100% /
13:15 ndarshan joined #gluster
13:15 calum_ joined #gluster
13:15 dylan above is #df -h /dev/mapper/VolGroup00-LogVol00
13:16 dylan I do not have a separate partition, just a directory structure within /
13:16 dylan which is /data/gv0/brick1/.glusterfs
13:17 kkeithley okay
13:18 khushildep joined #gluster
13:18 dusmant joined #gluster
13:19 dylan is there any command we could use to purge the files in .glusterfs
13:19 kkeithley so, the "files" in .glusterfs are still just link. deleting them won't recover any space as long as the real files, i.e. your data in the brick, still exists. And deleting them will break other functionality.
13:19 kkeithley s/still just link/still just links/
13:19 glusterbot What kkeithley meant to say was: so, the "files" in .glusterfs are still just links. deleting them won't recover any space as long as the real files, i.e. your data in the brick, still exists. And deleting them will break other functionality.
13:22 dylan there are no real files left, as I have deleted them. and yes, I understand now that deleting hardlink files inside the .glusterfs directory will break the hardlinks to real files that still exist in the brick
13:23 kkeithley see this http://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
13:23 glusterbot <http://goo.gl/j981n> (at joejulian.name)
13:24 kkeithley okay, well, the way hard links work, since you deleted the real files the links in .glusterfs are still "holding on to the file."  I guess you deleted the real files on the server instead of from a client, otherwise the matching link would have been deleted automatically.
13:25 kkeithley /would/should/
13:28 kkeithley so you can just rm the files in .glusterfs.
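
One way to spot which entries are orphaned, sketched here rather than quoted from the channel: regular files under .glusterfs whose link count has dropped to 1 no longer correspond to any file in the brick (directories are represented there as symlinks, hence the -type f filter):

    # list the candidates first; review before removing anything
    find /data/gv0/brick1/.glusterfs -type f -links 1 -print
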
13:28 dylan yes I deleted them from server
13:28 kkeithley right
13:29 dylan so we have to make sure that no one deletes files from the server node, even though it is possible.
13:29 kkeithley I'd highly recommend that you not put your bricks on the same volume as anything else, e.g. your root fs.
13:29 dylan so the problem is we don't know the exact hardlink file in .glusterfs when there are other files linked
13:30 kkeithley when there are other files linked?
13:32 dylan let's say we have deleted a couple of files while there are huge numbers of other files. so we don't know which the exact hardlink files in .glusterfs are
13:32 johnmwilliams joined #gluster
13:32 dylan is my understanding correct?
13:33 dylan or if I delete the wrong file in .glusterfs, will it re-create the hardlink inside the .glusterfs dir
13:34 kkeithley I'm not sure what you're asking. In the general case you don't need to know about the links in .glusterfs. Huge files don't change anything. If you go under the covers and delete the files in .glusterfs I can't say for certain, but they probably won't be recreated.
13:37 jiphex joined #gluster
13:37 dylan so the bottom line is we should not delete real files on the glusterfs node
13:38 Comnenus joined #gluster
13:38 kkeithley you must never delete files directly in the brick. Only from the client
13:39 kkeithley any client
13:40 dylan thanks kkeithley for the support; it is really appreciated. now I can go further with my testing
13:41 fyxim joined #gluster
13:42 kkeithley I highly recommend that you create your bricks on separate volumes.
13:45 dylan all right! since this is testing I just used the root fs. I will keep this in mind for production; thanks again
13:47 ira joined #gluster
13:52 davidbierce joined #gluster
13:58 gkleiman joined #gluster
13:58 lpabon joined #gluster
13:59 hagarth1 joined #gluster
14:05 Technicool joined #gluster
14:06 haritsu joined #gluster
14:13 geewiz joined #gluster
14:13 chirino joined #gluster
14:14 davidbierce joined #gluster
14:16 avati joined #gluster
14:19 plarsen joined #gluster
14:23 haritsu_ joined #gluster
14:25 dusmant joined #gluster
14:26 dusmant joined #gluster
14:27 haritsu joined #gluster
14:28 haritsu joined #gluster
14:29 mattappe_ joined #gluster
14:36 red-lichtie joined #gluster
14:36 _polto_ joined #gluster
14:37 red-lichtie left #gluster
14:37 dbruhn joined #gluster
14:40 davidbierce joined #gluster
14:43 red-lichtie joined #gluster
14:43 red-lichtie Hi. I'm having a serious issue with glusterfs 3.4 and PHP
14:43 tqrst joined #gluster
14:43 tqrst :O
14:44 red-lichtie "PHP Warning:  filemtime(): stat failed for data/check.txt in /var/www/owncloud/ftime.php on line 13"
14:44 avati red-lichtie: anything in the logs?
14:44 dbruhn red-lichtie, have you tried to stat the file from the file system?
14:44 red-lichtie I've set up ownCloud data area on a gluster mount and it fails
14:44 red-lichtie Stat from FS works
14:45 red-lichtie "data/check.txt was last modified: January 01 1970 01:00:00. "
14:45 dbruhn have you tried to open the file or anything else?
14:46 red-lichtie www-data@bbb-1:~/owncloud$ stat data/check.txt - Modify: 2013-11-27 14:22:44.887881000 +0000
14:46 red-lichtie It fails all over the place in ownCloud
14:46 calum_ joined #gluster
14:47 red-lichtie www-data@bbb-1:~/owncloud$ "stat data/check.txt"   --->   "Modify: 2013-11-27 14:22:44.887881000 +0000"
14:47 red-lichtie I su'ed to www-data to make sure
14:47 avati red-lichtie: anything in the mount logs?
14:47 avati that would be /var/log/glusterfs/<mount-name>.log
14:49 red-lichtie only I and W entries
14:49 avati anything around the time you got those errors?
14:50 red-lichtie No, fresh install. I'm building a mini cluster
14:51 dbruhn__ joined #gluster
14:52 red-lichtie 2 mirrored systems (mariadb+galera, repcached, haproxy, ucarp and glusterfs)
14:52 red-lichtie Only glusterfs is doing this to owncloud
14:53 red-lichtie mount in fstab is "bbb-1:gv-ocdata /var/www/owncloud/data glusterfs defaults,_netdev 0 0"
14:53 avati red-lichtie: i meant, anything in the gluster logs at the time of the errors? can you mount it in debug mode and re-run the php app?
14:53 red-lichtie How do I mount debug ?
14:53 red-lichtie I'll do all I can to pinpoint this
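
One way, assuming the fstab mount quoted above (log-level is a standard option of the fuse client):

    umount /var/www/owncloud/data
    mount -t glusterfs -o log-level=DEBUG bbb-1:gv-ocdata /var/www/owncloud/data
    # the client log is /var/log/glusterfs/var-www-owncloud-data.log
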
14:57 B21956 joined #gluster
14:59 red-lichtie avati: in dbug there are loads of "[afr-self-heal-common.c:887:afr_mark_sources] 0-gv-ocdata-replicate-0: Number of sources: 0"
14:59 red-lichtie "[afr-common.c:1380:afr_lookup_select_read_child] 0-gv-ocdata-replicate-0: Source selected as 0 for /"
14:59 red-lichtie And stuff
15:00 red-lichtie I guess that is standard?
15:03 shubhendu joined #gluster
15:07 LoudNoises did you install gluster on top of data that was there, or did you copy it over fresh via a client machine?
15:07 red-lichtie LoudNoises: I only added data at the mount point, directories were empty before I started
15:09 MiteshShah joined #gluster
15:10 red-lichtie Maybe a php error and not a glusterfs ?
15:10 avati red-lichtie: can you boil it down to a simple php test case?
15:10 red-lichtie The stat on the php file works fine
15:10 red-lichtie I did
15:11 red-lichtie avati:
15:11 red-lichtie 2 secs, going to pastebin
15:13 red-lichtie avati: http://pastebin.com/mi4pEM9C
15:13 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
15:14 kobiyashi Does anybody know what this message means?
15:14 kobiyashi [2013-11-27 15:13:20.224859] E [marker-quota-helper.c:229:mq_dict_set_contribution] (-->/usr/lib64/glusterfs/3.4.1/xlator/debug/io-stats.so(io_stats_lookup+0x157) [0x7f121ef662e7] (-->/usr/lib64/glusterfs/3.4.1/xlator/features/marker.so(marker_lookup+0x2f8) [0x7f121f17bfc8] (-->/usr/lib64/glusterfs/3.4.1/xlator/features/marker.so(mq_req_xattr+0x3c) [0x7f121f185f1c]))) 0-marker: invalid argument: loc->parent
15:14 lmello joined #gluster
15:14 red-lichtie avati: http://fpaste.org/57210/
15:14 glusterbot Title: #57210 Fedora Project Pastebin (at fpaste.org)
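
The paste has since expired; a reconstruction consistent with the warning and the output quoted in this discussion might look like this (file names are from the messages, the rest is an assumption):

    <?php
    // filemtime() emits the quoted warning and returns false when its stat
    // fails; date(..., false) then renders the epoch, i.e. "January 01 1970"
    foreach (array('ftime.php', 'data/check.txt') as $f) {
        echo "$f was last modified: " . date('F d Y H:i:s.', filemtime($f)) . "\n";
    }
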
15:15 calum_ joined #gluster
15:18 avati red-lichtie: does just running that recreate the issue?
15:18 avati just stating from php?
15:19 red-lichtie Yes, it creates the issue
15:19 red-lichtie http://fpaste.org/57216/55655211/
15:19 glusterbot Title: #57216 Fedora Project Pastebin (at fpaste.org)
15:19 wushudoin joined #gluster
15:19 red-lichtie avati: output of PHP is : "ftime.php was last modified: November 27 2013 14:31:14.
15:19 red-lichtie data/check.txt was last modified: January 01 1970 01:00:00."
15:20 avati red-lichtie: the error is something else, right?
15:20 red-lichtie Error in php log is: " PHP Warning:  filemtime(): stat failed for data/check.txt in /var/www/owncloud/ftime.php on line 13"
15:20 red-lichtie Well, apache error.log
15:22 avati red-lichtie: stat failed == ENOENT?
15:22 red-lichtie No idea
15:22 red-lichtie I'm not a php developer, I wouldn't know where to find that
15:23 avati red-lichtie: the problem is the log is not very indicative of what the problem is :S
15:24 LoudNoises if you run that php script from the command line it has the same issue, right? i.e. it's not just apache
15:24 red-lichtie Even with absolute path: ftime.php was last modified: November 27 2013 14:31:14.
15:24 red-lichtie data/check.txt was last modified: January 01 1970 01:00:00.
15:24 red-lichtie From the command line?
15:24 red-lichtie just php file ?
15:24 avati ah
15:25 avati you think the wrong date is the reason of "stat failed"?
15:25 LoudNoises i think so?  it might be an argument, it's been a while for me and php
15:25 LoudNoises php -f file.php
15:25 red-lichtie Same from the command line
15:25 LoudNoises i think the -f is optional
15:25 red-lichtie with an extra warning about timezone
15:26 red-lichtie http://fpaste.org/57220/65983138/
15:26 glusterbot Title: #57220 Fedora Project Pastebin (at fpaste.org)
15:27 dbruhn__ l
15:28 red-lichtie LoudNoises: I've set the timezone now, same issue
15:28 RameshN joined #gluster
15:29 red-lichtie With timezone set: http://fpaste.org/57224/55661771/
15:29 glusterbot Title: #57224 Fedora Project Pastebin (at fpaste.org)
15:31 LoudNoises i mean this seems like a php issue because things are working as you'd expect from the command line, correct?
15:32 LoudNoises i see some cryptic references to file permissions being a potential issue with stat() in php, so it might be worth testing with extremely open permissions briefly to see if that helps
15:33 red-lichtie but the user can stat correctly
15:34 LoudNoises yea, so you could try doing an echo exec('stat data/check.txt'); just to see if it's php's stat vs php calling linux's stat, but my suspicion is that the exec will work fine
15:34 red-lichtie http://fpaste.org/57227/13855664/
15:34 glusterbot Title: #57227 Fedora Project Pastebin (at fpaste.org)
15:34 red-lichtie exec works fine
15:35 red-lichtie Ahh, in the php file
15:35 red-lichtie will do
15:36 rjoseph joined #gluster
15:38 red-lichtie LoudNoises: Adds "Birth: - " to the output, the last line of exec stat
15:39 red-lichtie but it works fine for the ext4 fs where ftime.php is stored
15:40 LoudNoises works fine meaning it prints out all the lines?
15:40 red-lichtie Only the last line, but it does that for both of them, no error
15:41 red-lichtie I guess php echo doesn't do new lines
15:42 LoudNoises yea i think you can give exec() a variable as a second argument and then print_r() that to get all the lines of output
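
A sketch of that suggestion:

    <?php
    // exec() fills $out with one array element per line of output,
    // so print_r() shows the complete stat result
    exec('stat data/check.txt', $out);
    print_r($out);
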
15:45 shubhendu joined #gluster
15:46 vpshastry joined #gluster
15:48 red-lichtie No idea where to go from here.
15:49 LoudNoises the size of data/check.txt is actually 0 bytes like stat reports, yes?
15:50 lkoranda_ joined #gluster
15:51 red-lichtie LoudNoises: Yes, I only did a touch on it
15:52 red-lichtie PHP 5.4.4 + GlusterFS 3.2 works, PHP 5.5.1 + GlusterFS 3.4 fails
15:52 rotbeard joined #gluster
15:52 red-lichtie I haven't tried other combinations (one is Debian Wheezy the other is Debian Jessie)
15:53 red-lichtie I need Jessie for other dependencies
15:53 kaptk2 joined #gluster
15:53 LoudNoises it seems to work okay on my setup (php 5.3.3, glusterfs 3.3.2) so perhaps when some of the more knowledgeable folks get in they'll know what's up
15:54 red-lichtie OK, thanks for trying it on your system
15:57 dylan_ joined #gluster
16:01 RameshN joined #gluster
16:01 haritsu joined #gluster
16:07 jbrooks joined #gluster
16:10 red-lichtie LoudNoises: php5-common: All date functions result in a fatal PHP error on glusterfs filesystems
16:10 red-lichtie http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=697800
16:10 glusterbot <http://goo.gl/zADmYJ> (at bugs.debian.org)
16:11 vpshastry joined #gluster
16:11 _polto_ joined #gluster
16:11 nueces joined #gluster
16:12 LoudNoises wow, that seems crazy.  he doesn't mention his gluster version and it's a bit old - are you seeing that same behavior ?
16:13 LoudNoises (it doesn't happen on my cent or ubuntu systems - i don't have any debian to test)
16:13 red-lichtie No, not exactly the same but maybe related
16:14 red-lichtie stat(file) -> "Warning: stat(): stat failed for /var/www/owncloud/data/check.txt in /var/www/owncloud/ftime.php on line 19"
16:15 haritsu joined #gluster
16:15 red-lichtie No idea where to get the error number/level
16:18 LoudNoises you could try running      strace php yourscript.php  and checking the output
16:18 dbruhn__ it's telling you the error is on line 19 of the ftime.php file
16:18 LoudNoises but if you do that i'd recommend just doing the single time command
16:18 dbruhn__ go see what the code is saying, maybe you can reproduce the issue
16:19 dbruhn__ the fact that it's reporting that 1970 date stamp would suggest to me that maybe php could be calling a package you don't have installed or something along those lines
16:19 dbruhn__ especially since the stat is working from the file system
16:20 red-lichtie dbruhn__: it works for ext4 but not for glusterfs
16:22 dbruhn__ have you looked that the output of a stat command from both sides to see what the difference might be that would cause it to not be happy?
16:25 red-lichtie dbruhn__: What do you mean? I've done "stat file" from shell and it works, but stat(file) fails, but only if the file is on a glusterfs mount
16:26 dbruhn__ maybe I misread something earlier. I thought you said that if you stat the file from the mount point without involving ownCloud, you get accurate data back
16:27 red-lichtie http://fpaste.org/57243/69623138/
16:27 glusterbot Title: #57243 Fedora Project Pastebin (at fpaste.org)
16:27 red-lichtie dbruhn__: I'm setting up owncloud, but the issue is php not plating together with glusterfs 3.4.1
16:27 dbruhn__ yep
16:28 red-lichtie s/plating/playing/
16:28 glusterbot What red-lichtie meant to say was: dbruhn__: I'm setting up owncloud, but the issue is php not playing together with glusterfs 3.4.1
16:28 dbruhn__ run this for me quick "stat /var/www/owncloud/data/check.txt" and fpaste the output
16:28 dbruhn__ I am assuming data is your mount point?
16:29 red-lichtie yes, data is my mount
16:29 red-lichtie dbruhn__: http://fpaste.org/57227/13855664/
16:29 glusterbot Title: #57227 Fedora Project Pastebin (at fpaste.org)
16:31 red-lichtie dbruhn__: More info about setup http://fpaste.org/57216/55655211/
16:31 glusterbot Title: #57216 Fedora Project Pastebin (at fpaste.org)
16:31 LoudNoises red-lichtie: if you're feeling brave, i'd make a simple 1 line php script that calls  stat('/var/www/owncloud/data/check.txt'); and run that via strace
16:31 LoudNoises strace php stat.php
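
Filtering the trace to the stat family keeps the output manageable; a sketch (syscall names vary by architecture, stat64 being the 32-bit flavour that shows up later in the log):

    strace -f -e trace=stat64,lstat64,fstat64 php stat.php 2>&1 | grep check.txt
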
16:31 dbruhn__ So obviously the file system is returning stat information
16:32 red-lichtie LoudNoises: installing strace
16:32 red-lichtie dbruhn__: yes
16:33 dbruhn__ You are going to need to debug the owncloud code to figure out why it's not getting the response it expects.
16:33 andreask joined #gluster
16:33 dbruhn__ looks like there is an #owncloud irc room that might be able to help you with the code, I am sure they would be interested in the error as well.
16:34 dbruhn__ LoudNoises is right though, mimic what the code is doing and see if you can spot anything obvious.
16:34 red-lichtie dbruhn__: It isn't just owncloud, it is php in general for my version combination
16:35 red-lichtie "ioctl(3, SNDCTL_TMR_TIMEBASE or TCGETS, 0xbede3bbc) = -1 ENOTTY (Inappropriate ioctl for device)" ??
16:36 red-lichtie write(2, "PHP Warning:  stat(): stat faile"..., 113PHP Warning:  stat(): stat failed for /var/www/owncloud/data/check.txt in /var/www/owncloud/simple.php on line 2
16:36 red-lichtie ) = 113
16:36 red-lichtie 113 is the errno ?
16:37 red-lichtie LoudNoises: fpaste.org/57245/85570207/
16:37 LoudNoises yea this seems like a permissions thing
16:37 dbruhn__ the stat shows 644 on the file; everything should be able to read it
16:37 red-lichtie But the same user can do it from command line
16:38 LoudNoises yea what about the directory itself
16:38 LoudNoises do you x on that?
16:38 red-lichtie do I x on it?
16:38 LoudNoises *do you have x (search) on that?
16:38 red-lichtie www-data is owner
16:39 red-lichtie As I said, it works from command line
16:39 red-lichtie 113 = EHOSTUNREACH ?
16:40 red-lichtie Or is that socket only
16:41 hchiramm__ joined #gluster
16:44 pdrakeweb joined #gluster
16:46 hybrid512 joined #gluster
16:47 LoudNoises are you nfs mounting your gluster?
16:47 LoudNoises or is it using fuse
16:48 red-lichtie LoudNoises: No idea to be honest, whatever fstab uses for "glusterfs"
16:49 red-lichtie fuse: bbb-1:gv-ocdata on /var/www/owncloud/data type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
16:49 dbruhn__ you are using the fuse client
16:50 LoudNoises you could perhaps try nfs, although that shouldn't matter at all
16:50 LoudNoises what's odd to me is that this works for you on 3.2
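
Trying NFS would look something like this; gluster's built-in NFS server speaks NFSv3 over TCP (the options shown are the usual ones for gluster NFS, not quoted from the channel):

    mount -t nfs -o vers=3,proto=tcp,mountproto=tcp bbb-1:/gv-ocdata /mnt/test
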
16:50 kaptk2 joined #gluster
16:50 red-lichtie can I set group_id and user_id in fstab as an option?
16:51 dbruhn__ Should be able to
16:51 Mo__ joined #gluster
16:53 red-lichtie dbruhn__: no effect
16:53 red-lichtie still comes up user_id=0
16:54 red-lichtie What I don't get though is that it works with the current rights as www-data (apache user) when stat/cat/ls are used in a shell
16:55 red-lichtie So I can't imagine that it is a rights issue
16:55 LoudNoises agreed
16:55 LoudNoises i mean the stat64 call = 0 looks like the call is succeeding, from that strace
16:56 LoudNoises see https://bugs.php.net/bug.php?id=48099
16:57 glusterbot Title: PHP :: Bug #48099 :: stat failed on a readable / writable file in NFS mount (at bugs.php.net)
16:57 LoudNoises down below in the comments they're seeing basically the same thing as you
16:57 red-lichtie And php does the same from command line and in apache
16:57 hagarth joined #gluster
16:57 red-lichtie If that was it, it would be a regression in 5.5.1
16:57 LoudNoises right - it's an old bug
16:57 nueces joined #gluster
17:01 LoudNoises you're running 64bit debian
17:01 LoudNoises ?
17:01 red-lichtie Linux bbb-1 3.8.13-bone30 #1 SMP Thu Nov 14 02:59:07 UTC 2013 armv7l GNU/Linux
17:01 red-lichtie arm
17:01 dbruhn__ what distro?
17:01 red-lichtie Debian
17:03 _br_ joined #gluster
17:04 vpshastry joined #gluster
17:05 dbruhn__ Anyone running IP over IB have a min?
17:07 al joined #gluster
17:13 vpshastry left #gluster
17:18 kanagaraj joined #gluster
17:23 _pol joined #gluster
17:27 plarsen joined #gluster
17:29 jbd1 joined #gluster
17:44 RedShift joined #gluster
17:55 davidbierce joined #gluster
17:57 johnbot11 joined #gluster
18:16 diegows joined #gluster
18:24 kobiyashi here is a strange question
18:24 kobiyashi if i have a 4 node 2x2 distr/rep.
18:24 kobiyashi can i connect a fuse client to any of the nodes?
18:24 kkeithley yes
18:24 dbruhn__ Yep
18:25 kobiyashi thought so
18:25 kobiyashi so how does it work if I have a client connect as NFS?
18:25 kobiyashi same?
18:25 kkeithley no
18:26 dbruhn__ When you connect with the FUSE client the initial server you connect to provides a manifest to the client that contains all of the bricks it can connect to.
18:26 dbruhn__ NFS provides a single connection point
18:26 kkeithley the gnfs server is itself a client. But vanilla NFS doesn't have the same functionality that gluster has.  pNFS will do that.
18:26 dbruhn__ the fuse client connects to all of them all the time so to speak
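
In practice the server named in a fuse mount only matters for that initial volfile fetch; packages of this era also accept a fallback server for it (placeholder hostnames):

    # node2 is only consulted if node1 cannot serve the volume layout
    mount -t glusterfs -o backupvolfile-server=node2 node1:/gv0 /mnt/gv0
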
18:30 dbruhn__ kkeithley, can pNFS be used with gluster?
18:31 kkeithley nfs-ganesha will have gluster support, and it will do pNFS (eventually)
18:32 dbruhn__ awesome
18:34 kkeithley nfs-ganesha 2.0-RC-final is cooking. It has a glusterfs FSAL. The FSAL will need the gfapi that'll be in glusterfs-3.5. (At first blush a backport of gfapi _ought_ to be easy, maybe worth asking for).
18:35 kkeithley backport to 3.4.2
18:35 * kkeithley runs for cover
18:37 dbruhn__ hahahahahah
18:38 * kkeithley thinks it's worth doing, just to get it out there sooner
18:39 Technicool joined #gluster
18:39 dbruhn__ will nfs-ganesha support parallel writes to replication pairs like the fuse client dose today?
18:39 dbruhn__ s/dose/does/
18:39 glusterbot What dbruhn__ meant to say was: will nfs-ganesha support parallel writes to replication pairs like the fuse client does today?
18:44 kkeithley at the NFS level or the gluster level?  The ganesha.nfsd will do parallel writes via the gfapi-based FSAL to the underlying gluster servers. I don't know if pNFS clients will do the same thing to pNFS servers, but if it doesn't that would be disappointing. But just so we don't set the wrong expectations, our first releases of nfs-ganesha+glusterfs will probably only support NFSv4, not pNFS.
18:49 Technicool joined #gluster
18:55 kobiyashi can someone help me understand what this means in my gluster/brick mnt.log
18:55 kobiyashi W [marker-quota.c:2039:mq_inspect_directory_xattr] 0-devstatic-marker: cannot add a new contribution node
19:15 zerick joined #gluster
19:28 aliguori joined #gluster
19:28 _polto_ joined #gluster
19:28 _polto_ joined #gluster
19:36 lpabon joined #gluster
19:50 red-lichtie OK, I've tried all sorts. But as soon as I use a GlusterFS mount (3.4.1) together with php (5.5.1) the function stat(file) stops working
19:50 red-lichtie Is there an alternative?
19:50 dbruhn__ can you upgrade php or downgrade gluster?
19:51 dbruhn__ Did I read earlier you weren't having the problem with 3.2?
19:54 _pol joined #gluster
20:11 _polto_ joined #gluster
20:11 _polto_ joined #gluster
20:12 red-lichtie Not sure if I can downgrade
20:12 red-lichtie Maybe build from source
20:12 red-lichtie I'll heck that out
20:12 red-lichtie s/heck/check/
20:12 glusterbot What red-lichtie meant to say was: I'll check that out
20:12 dbruhn__ why can't you install an earlier version?
20:13 Hau joined #gluster
20:14 Hau Hi
20:14 glusterbot Hau: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
20:15 Hau Has anyone had experience with geo-replication and throttling the bandwidth between locations?
20:17 red-lichtie dbruhn__: not sure of the dependencies, so going to try that now
20:24 nueces joined #gluster
20:24 jag3773 joined #gluster
20:26 nueces joined #gluster
20:29 avati red-lichtie: please post debug logs to the mailing list
20:29 avati some things to try:
20:29 avati - disable performance translators
20:29 avati - mount with --entry-timeout=0 --attribute-timeout=0
20:29 * Hau has set away! (auto away after idling [15 min]) [Log:ON] .gz.
20:30 avati post the results on the mailing list as well
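
A concrete rendering of those two suggestions, using red-lichtie's volume and mount point (which performance translators to switch off is an illustrative choice):

    gluster volume set gv-ocdata performance.quick-read off
    gluster volume set gv-ocdata performance.stat-prefetch off
    gluster volume set gv-ocdata performance.io-cache off
    umount /var/www/owncloud/data
    mount -t glusterfs -o entry-timeout=0,attribute-timeout=0 \
        bbb-1:gv-ocdata /var/www/owncloud/data
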
20:33 avati left #gluster
20:33 avati joined #gluster
20:33 delhage joined #gluster
20:36 badone joined #gluster
20:45 red-lichtie dbruhn__: Downgraded to 3.2.7 and it works again
20:46 red-lichtie So it is not a php problem
20:46 LoudNoises does 3.3.2 still work?
20:46 red-lichtie I don't have 3.3.2
20:46 dbruhn__ There was a major change in the file system at 3.3, and 3.3.2 is the latest GA
20:46 dbruhn__ semiosis, where are the 3.3.2 debian packages?
20:48 red-lichtie dbruhn__: Debian have skipped 3.3 altogether by the looks of it
20:48 red-lichtie See: http://ftp.debian.org/debian/pool/main/g/glusterfs/
20:48 glusterbot <http://goo.gl/YvAfVN> (at ftp.debian.org)
20:49 red-lichtie Now I have to pin the packages so they aren't upgraded next update
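
A minimal apt pin for that (the package glob is a sketch; adjust to the package names actually installed):

    # /etc/apt/preferences.d/glusterfs
    Package: glusterfs-*
    Pin: version 3.2.7-*
    Pin-Priority: 1001
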
20:50 LoudNoises it's available for wheezy
20:50 LoudNoises what are you on?
20:50 LoudNoises http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.2/Debian/README
20:50 glusterbot <http://goo.gl/Llb7L> (at download.gluster.org)
20:52 red-lichtie LoudNoises: I'm on an arm machine
20:52 red-lichtie Well, the gluster is
20:52 LoudNoises oh right
20:53 LoudNoises is building from source an option?
20:53 red-lichtie I've just built mariadb (mysql++) from source, gluster should be easy compared to that :)
20:54 radez joined #gluster
20:54 LoudNoises lol true
20:54 LoudNoises http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.2/glusterfs-3.3.2.tar.gz
20:54 glusterbot <http://goo.gl/9bU5fu> (at download.gluster.org)
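
The tarball builds the usual autotools way; a sketch (build dependencies such as flex, bison and the readline headers need to be installed first):

    wget http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.2/glusterfs-3.3.2.tar.gz
    tar xzf glusterfs-3.3.2.tar.gz
    cd glusterfs-3.3.2
    ./configure && make && sudo make install
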
20:55 red-lichtie LoudNoises: I'll add ir as a source repo then build the debs from source
20:55 radez I've just installed gluster 3.4 on rhel 6.4 from download.gluster's yum repo. when I try and start glusterd I get 0-mem-pool: invalid argument
20:55 red-lichtie s/ ir / it /
20:55 glusterbot What red-lichtie meant to say was: LoudNoises: I'll add it as a source repo then build the debs from source
20:55 radez Google wasn't too helpful right off; has anyone seen the 0-mem-pool: invalid argument log msg?
20:56 _polto_ joined #gluster
21:04 dbruhn__ btw red-lichtie, how do you like owncloud? I was looking at it a while ago
21:05 kkeithley1 joined #gluster
21:06 wushudoin joined #gluster
21:09 red-lichtie I think it is brilliant, I have my contacts, calendars and files on it, not google or dropbox. My data is my own :)
21:11 red-lichtie Wanted the additional backup, so I have ucarp, haproxy, repcached, glusterfs, mariadb, galera and owncloud on a couple of BeagleBone Blacks
21:11 dbruhn__ I 100% understand why it's awesome, just wasn't sure how functional it was, and it seemed like they were trying to do a full-featured commercial version and a stripped-down free version
21:12 red-lichtie It works a treat. I mainly use it directly out of Thunderbird/Lightning, Evolution and Android
21:12 red-lichtie I don't really use the web ui
21:12 dbruhn__ How many users?
21:12 red-lichtie Me and the better half
21:13 dbruhn__ ahh ok
21:13 red-lichtie A BeagleBone is a tiny thing
21:14 red-lichtie Running the micro cluster off dynamic dns and 2 powerlines in separate buildings
21:14 dbruhn__ yep, I was thinking about setting it up for a couple of my customers
21:14 red-lichtie It is really cool
21:14 dbruhn__ couple hundred users
21:14 dbruhn__ I am worried about how it will scale
21:14 red-lichtie It should be able to handle that easily
21:15 dbruhn__ these guys generate about 2500 images per day
21:15 red-lichtie Have a look at mariadb+galera for a true multi-master replicating db
21:15 dbruhn__ on top of all of their other data
21:15 red-lichtie and multi gluster nodes for the file load
21:16 red-lichtie The nice thing about galera is that it should resync upon rejoining the db cluster too
21:17 red-lichtie It is awesome for a £120 set up
21:21 red-lichtie gtg, thanks for the help!
21:21 LoudNoises 3.3.2 works?
21:21 red-lichtie I'll try 3.3 tomorrow
21:21 LoudNoises ahh, good luck
21:21 red-lichtie got to build it 1st
21:22 red-lichtie left #gluster
21:24 dbruhn__ apparently 3.2.7 was working but 3.4.1 wasn't
21:24 dbruhn__ he is going to try 3.3.2 tomorrow
21:39 glusterbot New news from newglusterbugs: [Bug 1033576] rm: cannot remove Directory not empty on path that should be clean already <http://goo.gl/tW3gtb>
22:03 _polto_ joined #gluster
22:19 andreask joined #gluster
22:32 brimstone joined #gluster
23:12 haritsu joined #gluster
23:13 StarBeast joined #gluster
23:59 gdubreui joined #gluster
