IRC log for #gluster, 2014-03-07

All times shown according to UTC.

Time Nick Message
00:00 theron joined #gluster
00:00 cjanbanan joined #gluster
00:01 theron joined #gluster
00:03 sjoeboo joined #gluster
00:08 andrewklau joined #gluster
00:29 tokik joined #gluster
00:34 Joe630 I have the dumbest question ever.
00:34 Joe630 Do I need nfs packages or does gluster act as the NFS server?
00:34 cfeller the latter
00:35 Joe630 Is there a service that needs to start up?  Do I need portmapper?
00:35 cfeller gluster provides a builtin NFS server
00:35 cfeller if you type "gluster volume status"
00:35 cfeller you should be able to see if the builtin NFS server is running
00:36 Joe630 checking things out. sweet, thanks
00:43 andrewklau joined #gluster
00:58 cp0k joined #gluster
01:07 vpshastry joined #gluster
01:14 sjoeboo joined #gluster
01:18 sroy_ joined #gluster
01:20 mattappe_ joined #gluster
01:22 JoeJulian ~nfs | Joe630
01:22 glusterbot Joe630: To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
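A minimal sketch of the mount glusterbot describes, assuming a hypothetical server gfs1 and volume myvol on an EL distribution:

    # on the server: kernel nfsd must be disabled, an rpc port mapper must be running
    service nfs stop && chkconfig nfs off
    service rpcbind start
    # on the client: mount with the options most distros require
    mount -t nfs -o tcp,vers=3 gfs1:/myvol /mnt/myvol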
01:34 andrewklau joined #gluster
01:42 zapotah joined #gluster
01:42 zapotah joined #gluster
01:43 badone joined #gluster
02:06 ilbot3 joined #gluster
02:06 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:07 gmcwhistler joined #gluster
02:07 atrius` joined #gluster
02:08 primechuck joined #gluster
02:11 elyograg_ JoeJulian: i have an idea for your dirty xattr script, but my python-fu is not strong.  i would have it gather inode numbers as it crawls the filesystem and finds dirty files.  If it finds the same inum in a dirty file later, it would avoid printing out the filename. it would need a way to check .glusterfs last.  either it needs to have a known restriction that everything in the path you give must be on the same filesystem, or it would have to kno
02:12 badone joined #gluster
02:13 elyograg_ optionally, it could print out the filename when the inum is the same, but mark it so we'd know it's a duplicate.
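A rough shell sketch of elyograg_'s idea (the script under discussion is Python and isn't shown here). The brick path is a placeholder and, for illustration, every regular file stands in for a "dirty" file; repeated inode numbers are marked as duplicates, and .glusterfs is scanned last so real pathnames win:

    BRICK=/bricks/brick1   # hypothetical brick path
    { find "$BRICK" -path "$BRICK/.glusterfs" -prune -o -type f -printf '%i %p\n';
      find "$BRICK/.glusterfs" -type f -printf '%i %p\n'; } |
    awk '{ if ($1 in seen) print $0, "(duplicate)"; else { seen[$1]; print } }'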
02:13 ThatGraemeGuy joined #gluster
02:16 daMaestro joined #gluster
02:16 dcmbrown_ joined #gluster
02:17 haomaiwa_ joined #gluster
02:18 primechu_ joined #gluster
02:20 bharata-rao joined #gluster
02:21 JonnyNomad_ joined #gluster
02:22 RayS_ joined #gluster
02:22 marcoceppi_ joined #gluster
02:22 primechu_ joined #gluster
02:22 delhage_ joined #gluster
02:23 mkzero_ joined #gluster
02:23 solid_liq joined #gluster
02:23 solid_liq joined #gluster
02:24 sputnik13 joined #gluster
02:24 zapotah__ joined #gluster
02:24 zapotah__ joined #gluster
02:24 cyberbootje1 joined #gluster
02:25 flrichar joined #gluster
02:25 JordanHackworth joined #gluster
02:25 swat30_ joined #gluster
02:25 shapemaker joined #gluster
02:25 baoboa joined #gluster
02:26 DV__ joined #gluster
02:27 dusmant joined #gluster
02:27 wrcski joined #gluster
02:29 harish_ joined #gluster
02:31 prasanth joined #gluster
02:31 JonathanS joined #gluster
02:36 primechuck joined #gluster
02:59 mtanner_ joined #gluster
03:00 cyberbootje joined #gluster
03:11 jporterfield joined #gluster
03:15 glusterbot` joined #gluster
03:22 DBuzz joined #gluster
03:25 georgeh|workstat joined #gluster
03:26 overclk joined #gluster
03:27 rotbeard joined #gluster
03:30 Slasheri joined #gluster
03:30 Slasheri joined #gluster
03:40 mattappe_ joined #gluster
03:46 itisravi joined #gluster
03:50 leochill joined #gluster
03:59 badone joined #gluster
04:11 glusterbot New news from newglusterbugs: [Bug 1031328] Gluster man pages are out of date. <https://bugzilla.redhat.com/show_bug.cgi?id=1031328>
04:16 hagarth joined #gluster
04:43 ppai joined #gluster
04:43 deepakcs joined #gluster
04:45 jporterfield joined #gluster
04:48 gdubreui joined #gluster
04:49 satheesh1 joined #gluster
05:00 ndarshan joined #gluster
05:11 RameshN joined #gluster
05:42 jporterfield joined #gluster
05:45 raghu joined #gluster
05:56 edward1 joined #gluster
06:14 jporterfield joined #gluster
06:26 Alex I'm curious - how does Gluster keep attrs like mtime in sync? I have an odd (repeated) issues where on different servers I see different mtimes for the same file. (Setup is two bricks, one on each server, each server also mounts localhost:/volumename).
06:26 Alex I'm wondering if it's just split brain, or if I'm missing something
06:42 jporterfield joined #gluster
06:42 glusterbot New news from newglusterbugs: [Bug 1073763] network.compression fails simple '--ioengine=sync' fio test <https://bugzilla.redhat.com/show_bug.cgi?id=1073763>
06:42 meghanam joined #gluster
06:50 ngoswami joined #gluster
06:52 kevein joined #gluster
07:14 tokik joined #gluster
07:17 jtux joined #gluster
07:33 psharma joined #gluster
07:33 CheRi joined #gluster
07:52 mohankumar joined #gluster
07:52 kam270__ joined #gluster
07:52 rgustafs joined #gluster
07:52 jporterfield joined #gluster
07:52 bharata-rao joined #gluster
07:52 vimal joined #gluster
07:52 17SAAP0TV joined #gluster
07:52 nshaikh joined #gluster
07:52 jruggiero joined #gluster
07:52 Philambdo joined #gluster
07:52 satheesh joined #gluster
07:52 benjamin_____ joined #gluster
07:52 rjoseph joined #gluster
07:52 snehal joined #gluster
07:52 kris joined #gluster
07:52 vpshastry joined #gluster
07:52 bala joined #gluster
07:52 lalatenduM joined #gluster
07:52 kdhananjay joined #gluster
07:52 spandit joined #gluster
07:52 badone joined #gluster
07:52 haomaiwang joined #gluster
07:52 kanagaraj joined #gluster
07:52 jiffe98 joined #gluster
07:52 neoice_ joined #gluster
07:52 eryc joined #gluster
07:52 pdrakeweb joined #gluster
07:52 semiosis joined #gluster
07:52 m0zes joined #gluster
07:52 velladecin joined #gluster
07:52 tjikkun_work_ joined #gluster
07:52 asku joined #gluster
07:52 primusinterpares joined #gluster
07:52 johnmwilliams__ joined #gluster
07:52 larsks joined #gluster
07:52 NuxRo joined #gluster
07:52 abyss^ joined #gluster
07:52 fsimonce joined #gluster
07:52 morse joined #gluster
07:52 Kins joined #gluster
07:52 Peanut joined #gluster
07:52 ninkotech_ joined #gluster
07:52 fyxim joined #gluster
07:52 sticky_afk joined #gluster
07:52 Joe630 joined #gluster
07:52 edong23 joined #gluster
07:52 dcmbrown_ joined #gluster
07:52 marcoceppi_ joined #gluster
07:52 qdk_ joined #gluster
07:52 cedric__1 joined #gluster
07:52 zapotah_1 joined #gluster
07:52 ccha joined #gluster
07:52 wgao joined #gluster
07:52 purpleidea joined #gluster
07:52 RobertLaptop joined #gluster
07:52 JordanHackworth joined #gluster
07:52 flrichar joined #gluster
07:54 [o__o] joined #gluster
07:56 ctria joined #gluster
07:56 ctria joined #gluster
07:57 ekuric joined #gluster
07:58 mohankumar joined #gluster
08:01 ricky-ti1 joined #gluster
08:03 ravindran joined #gluster
08:15 cjanbanan joined #gluster
08:18 jtux joined #gluster
08:20 sun^^^ joined #gluster
08:22 keytab joined #gluster
08:29 eseyman joined #gluster
08:29 mohankumar joined #gluster
08:42 andreask joined #gluster
08:42 yinyin joined #gluster
08:43 hybrid512 joined #gluster
08:46 askb_ joined #gluster
09:11 cfeller joined #gluster
09:11 Norky joined #gluster
09:13 gdubreui joined #gluster
09:14 irctc720 joined #gluster
09:29 haomaiwa_ joined #gluster
09:29 al joined #gluster
09:30 lalatenduM joined #gluster
09:32 jporterfield joined #gluster
09:42 ekobox joined #gluster
09:43 muhh joined #gluster
09:45 al joined #gluster
09:48 cjanbanan joined #gluster
09:55 andreask joined #gluster
10:06 mohankumar joined #gluster
10:08 rahulcs joined #gluster
10:20 YazzY joined #gluster
10:20 YazzY joined #gluster
10:24 ndarshan joined #gluster
10:26 Slash joined #gluster
10:31 Slash joined #gluster
10:33 DV__ joined #gluster
10:41 ngoswami joined #gluster
10:43 prasanth joined #gluster
10:47 glusterbot New news from newglusterbugs: [Bug 1073844] geo-replication fails with OSError when setting remote xtime <https://bugzilla.redhat.com/show_bug.cgi?id=1073844>
10:54 harish_ joined #gluster
11:00 rastar joined #gluster
11:01 vimal joined #gluster
11:04 vimal joined #gluster
11:04 lalatenduM joined #gluster
11:09 vimal joined #gluster
11:13 andrewklau joined #gluster
11:15 cjanbanan joined #gluster
11:15 vimal joined #gluster
11:20 vimal joined #gluster
11:22 mkzero joined #gluster
11:22 hybrid512 joined #gluster
11:27 mZbC1 joined #gluster
11:28 mZbC1 hi there
11:28 Norky hello
11:28 glusterbot Norky: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
11:28 * Norky spanks glusterbot
11:29 mZbC1 I'm having an issue, on Fedora20, gluster 3.4.2, volume with 2.7TB (500GB used), ext4. Any directory with more than 19 files (actually from my tests 19-24 files), I can do an ls, without getting I/O errors. Any ideas?
11:30 Norky "can" or "can't"?
11:30 Norky if "can", you dont' really seem to be describing a problem
11:31 madhu joined #gluster
11:31 mZbC1 sorry, I can't do an ls without erros
11:31 mZbC1 *errors
11:32 rgustafs joined #gluster
11:32 Norky you might be experiencing this problem: http://joejulian.name/blog/glusterfs-bit-by-ext4-structure-change/
11:32 glusterbot Title: GlusterFS bit by ext4 structure change (at joejulian.name)
11:33 mZbC1 Norky: let me check
11:36 mZbC1 Norky: does not apply, gluster 3.4.2
11:37 mZbC1 in my case, the readdir() goes into an infinite loop, but it immediately outputs I/O errors
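A repro sketch of the symptom as described, assuming a FUSE mount at /mnt/gv and that roughly 20 short-named files trigger it:

    mkdir /mnt/gv/test && cd /mnt/gv/test
    for i in $(seq 1 25); do touch file$i; done
    ls    # per the report, fails with: ls: reading directory .: Input/output error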
11:43 lflores joined #gluster
11:43 Norky I suggest trying the same test with an XFS-based brick, XFS is generally recommended
11:43 Norky if nothing else it should narrow the problem to one within gluster or the underlying local FS
11:44 Norky also, how is the client connecting?
11:44 harish_ joined #gluster
11:44 lflores the client is a Fedora 20 connecting through gluster-fuse
11:45 lflores if connected with NFS, there is no problem
11:45 Norky so everything should be on the same, recent version of glusterfs
11:45 lflores in the ext4 underlying fs there is no problem
11:46 lflores updated fedora 20 on client and server, gluster 3.4.2 on both
11:46 lflores problem persists after full reboot
11:46 lflores problems persists if we mount from another server
11:46 lflores problem goes way if we mount with nfs
11:47 lflores the volume has these properties: diagnostics.client-log-level: DEBUG
11:47 lflores diagnostics.brick-log-level: DEBUG
11:47 lflores but there are no error messages on logs
11:48 Norky what are you seeing in the gluster logs?
11:48 mZbC1 Norky: just to put you up to speed.. the problem I described, it's lflores problem...! :)
11:48 Norky yeah, I guessed you were working together
11:48 lflores [2014-03-07 11:48:28.667518] D [io-threads.c:325:iot_schedule] 0-vmsurano-io-threads: OPENDIR scheduled as fast fop
11:48 lflores [2014-03-07 11:48:28.668486] D [io-threads.c:325:iot_schedule] 0-vmsurano-io-threads: READDIRP scheduled as fast fop
11:48 lflores [2014-03-07 11:48:28.670050] D [io-threads.c:325:iot_schedule] 0-vmsurano-io-threads: READDIRP scheduled as fast fop
11:48 lflores [2014-03-07 11:48:28.671191] D [io-threads.c:325:iot_schedule] 0-vmsurano-io-threads: LOOKUP scheduled as fast fop
11:48 lflores [2014-03-07 11:48:30.213806] D [io-threads.c:325:iot_schedule] 0-vmsurano-io-threads: READ scheduled as slow fop
11:48 lflores [2014-03-07 11:48:30.222790] D [io-threads.c:325:iot_schedule] 0-vmsurano-io-threads: READ scheduled as slow fop
11:48 lflores [2014-03-07 11:48:30.223819] D [io-threads.c:325:iot_schedule] 0-vmsurano-io-threads: READ scheduled as slow fop
11:48 lflores [2014-03-07 11:48:30.245707] D [io-threads.c:325:iot_schedule] 0-vmsurano-io-threads: READ scheduled as slow fop
11:49 Norky might want to use a pastebin for that :)
11:49 mZbC1 lflores: you should use gist for that
11:49 mZbC1 :)
11:49 lflores ok, I got that
11:49 Norky or fpaste - as it's included with fedora
11:49 Norky tail -n 50 /var/log/glusterfs/MOUNTPOINT.log | fpaste
11:49 lflores the "ls" command on client generate a few more lines, but they are all like that
11:50 Norky for example
11:50 lflores some have the STATFS call instead
11:51 ndevos hmm, I've seen that on my home system too - I'm not sure if the number of files is always 19+ when the problem happens though - nfs works for me too
11:52 lflores http://pastebin.com/wKE1sExt
11:52 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
11:52 lflores too late
11:52 ndevos for me, tcpdump/wireshark showed that the GlusterFS protocol does not have any errors, so I suspect an issue in the glusterfs-fuse part, just not sure where yet
11:53 lflores http://ur1.ca/gs4pr
11:53 glusterbot Title: #83316 Fedora Project Pastebin (at ur1.ca)
11:55 tokik joined #gluster
11:56 lflores oh, we have the problem on 2 dirs (a real and a test one), the number of files that triggers the problem is different ...
11:56 ndevos lflores: could you file a bug for that? include the exact versions and architecture of glusterfs from both the server- and the client-side
11:56 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
11:57 lflores one is 24 the other is 20
11:57 * ndevos uses a x86_64 laptop and a arm7hl (32-bit) server
11:57 Norky hmm, nothing springs to mind for me
11:58 Norky I might see if I can replicate that after lunch
11:58 lflores ok, I'll file the full detailed bug report, thx
11:58 ndevos lflores: thanks, can you pass the bug# here when you're done?
11:59 Norky just for giggles, dump a directory listing on fpaste or whereever, please?
12:00 Norky that is, of a directory where you're having the problem, generated over NFS if need be
12:00 tokik_ joined #gluster
12:00 zapotah joined #gluster
12:00 liquidat joined #gluster
12:02 ndevos Norky: http://fpaste.org/83318/19368413/ the /lan is automounted over glusterfs, and /net is nfs
12:02 glusterbot Title: #83318 Fedora Project Pastebin (at fpaste.org)
12:03 lflores http://ur1.ca/gs4s2
12:03 glusterbot Title: #83322 Fedora Project Pastebin (at ur1.ca)
12:03 ndevos Norky: if you know the file/dir, access (like read/write/stat/..) works over glusterfs, just listing the direntries fails
12:03 itisravi_ joined #gluster
12:03 bewees joined #gluster
12:04 saurabh joined #gluster
12:04 Norky okay, so it's when the readdirplus() data reaches some threshold size
12:04 lflores just the listing fails, as you can see from : http://ur1.ca/gs4s2 we can delete a dir or file, but we can't list
12:04 glusterbot Title: #83322 Fedora Project Pastebin (at ur1.ca)
12:05 Norky lflores, you said it varies around 20-24 files
12:05 lflores yeap
12:05 Norky could you try with longer filenames?
12:05 ndevos maybe a threshold, or fuse doesnt like some inode numbers, or ...
12:05 lflores sure
12:05 Norky thisisalongfilenameXX etc
12:05 Norky and see if you reach the threshold earlier
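One way to run that test, using Norky's filename pattern (the mount point /mnt/gv is assumed):

    mkdir /mnt/gv/test2 && cd /mnt/gv/test2
    for i in $(seq -w 1 30); do
      touch thisisalongfilename$i
      ls . >/dev/null 2>&1 || { echo "ls broke after $i files"; break; }
    done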
12:06 lflores you are correct
12:08 lflores http://ur1.ca/gs4t9,
12:08 glusterbot Title: ur1 Generator (at ur1.ca)
12:08 Norky so you can create fewer filenames of the form thisisalongfilenameXX before the problem appears?
12:08 Norky comma at the end of the URL breaks it
12:08 lflores with longer filenames we have problems at 10 files
12:08 Norky no worries, stripped it out :)
12:09 Norky just so you know for next time
12:09 lflores it was too late when I saw it ...
12:09 Norky aye, no problem
12:13 tokik joined #gluster
12:13 rwheeler joined #gluster
12:14 lflores (brb, I'll give more news after the bug fill ...)
12:15 bewees Hi, which protocol do you recommend for backups on a NAS? I read about nfs (which has windows and linux clients available) and I also read about ndmp. For ndmp I didn't find an open source server implementation for linux though. I see there's Symantec, but I don't want commercial software.
12:16 bewees (My "NAS" is actually an arm computer (raspberry pi alike) with an esata port and hdd on a 1 gbit/s switch within my private network)
12:19 tokik_ joined #gluster
12:19 rfortier1 joined #gluster
12:20 Norky erm, given that this is the glusterfs channel, your question might be better asked elsewhere
12:21 andreask joined #gluster
12:22 Norky one could use gluster & Samba for that, so you'd have all three protocols (NFS, SMB, native GlusterFS) available to your clients (Windows does not yet have a GlusterFS client)
12:23 Norky but as I said, your question is rather out of the scope of this channel :)
12:26 bewees Thanks for the information, I try to find another channel which is more related to my question
12:27 samppah bewees: are you familiar with ndmp? i haven't heard of that before but sounds interesting
12:29 bewees samppah I'm not familiar with it and I didn't really find a lot of open source projects about that. Symantec uses it for its NetBackup program from which you can make backups for linux, unix and windows systems
12:30 samppah bewees: yeh, wikipedia mentions Bacula and Amanda.. not sure if you need commercial support for ndmp on those
12:34 bewees samppah Right, but I think ndmp is only available in Bacula Enterprise which also contains proprietary code and is not free
12:34 bewees I'll check out Amanda, thanks
12:41 tokik joined #gluster
12:44 rahulcs joined #gluster
12:45 tokik_ joined #gluster
12:52 benjamin_____ joined #gluster
12:53 hagarth joined #gluster
13:02 marianogg9 joined #gluster
13:02 marianogg9 hey guys
13:02 marianogg9 anyone using aws+gluster+php implementation? (no ngnix)
13:03 marianogg9 sorry nginx
13:10 bewees joined #gluster
13:12 ravindran joined #gluster
13:19 smithyuk1 joined #gluster
13:28 rahulcs joined #gluster
13:30 jtux joined #gluster
13:30 vpshastry joined #gluster
13:38 roman joined #gluster
13:41 getup- joined #gluster
13:44 jmarley joined #gluster
13:44 jmarley joined #gluster
13:46 rgustafs joined #gluster
13:48 Guest76774 Hi guys, please help me >_< with gluster, I'm trying to install a few instances on amazon using puppet, with 1 server with 1 brick and a few clients
13:48 kanagaraj joined #gluster
13:48 Guest76774 the clients are all configured and mount glusterfs the same way
13:48 Guest76774 and some of them failed!
13:48 Guest76774 0-glusterfs: connection to 10.146.177.186:24007 failed (Connection timed out)
13:51 Guest76774 0-glusterfsd-mgmt: failed to connect with remote-host: Transport endpoint is not connected
14:00 rfortier joined #gluster
14:00 Guest76774 here is my logs:  http://pastie.org/8889769    http://pastie.org/8889775
14:00 glusterbot Title: #8889769 - Pastie (at pastie.org)
14:07 rakkaus_ joined #gluster
14:07 Guest76774 hey please some one save my life
14:07 Norky I have little experience with "cloud" services like AWS
14:08 Norky check with nmap that the clients can all 'see' an open port on TCP 10.146.177.186:24007
14:08 Guest76774 I tried to disable firewall at all
14:08 Guest76774 same
14:09 zapotah joined #gluster
14:09 zapotah joined #gluster
14:09 mattapperson joined #gluster
14:09 Guest76774 same image for gluster client same packages installed
14:09 Guest76774 and some of them can't connect to the gluster server
14:10 sroy joined #gluster
14:10 Norky as I said, check that clients see an open port
14:14 Guest76774 okay let me check
14:19 Philambdo joined #gluster
14:21 Guest76774 so here is output where client cant connect
14:21 Guest76774 root@ip-10-216-129-24 log]# nmap 10.146.177.186
14:21 Guest76774 Starting Nmap 5.51 ( http://nmap.org ) at 2014-03-07 14:16 UTC
14:21 Guest76774 Nmap scan report for ip-10-146-177-186.ec2.internal (10.146.177.186)
14:21 Guest76774 Host is up (0.00040s latency).
14:21 Guest76774 Not shown: 846 closed ports, 148 filtered ports
14:21 Guest76774 PORT      STATE SERVICE
14:21 glusterbot Title: Nmap - Free Security Scanner For Network Exploration & Security Audits. (at nmap.org)
14:21 Guest76774 22/tcp    open  ssh
14:21 Guest76774 2049/tcp  open  nfs
14:21 Guest76774 49152/tcp open  unknown
14:21 Guest76774 49153/tcp open  unknown
14:21 Guest76774 49154/tcp open  unknown
14:21 Guest76774 49155/tcp open  unknown
14:21 Guest76774 Nmap done: 1 IP address (1 host up) scanned in 1.53 seconds
14:21 bennyturns joined #gluster
14:22 Guest76774 and same where client was connected
14:22 Guest76774 nmap ip-10-254-42-84.us-west-2.compute.internal -Pn
14:22 Guest76774 Starting Nmap 5.51 ( http://nmap.org ) at 2014-03-07 14:20 UTC
14:22 Guest76774 Nmap scan report for ip-10-254-42-84.us-west-2.compute.internal (10.254.42.84)
14:22 Guest76774 Host is up (0.0020s latency).
14:22 Guest76774 Not shown: 944 filtered ports, 54 closed ports
14:22 Guest76774 PORT      STATE SERVICE
14:22 glusterbot Title: Nmap - Free Security Scanner For Network Exploration & Security Audits. (at nmap.org)
14:22 Guest76774 22/tcp    open  ssh
14:22 Guest76774 49152/tcp open  unknown
14:22 sticky_afk joined #gluster
14:22 Guest76774 Nmap done: 1 IP address (1 host up) scanned in 7.54 seconds
14:25 gmcwhistler joined #gluster
14:25 Guest76774 also with the port
14:25 jmarley joined #gluster
14:25 jmarley joined #gluster
14:25 Guest76774 [root@ip-10-216-129-24 log]# nmap 10.146.177.186 -p 24007
14:25 Guest76774 Starting Nmap 5.51 ( http://nmap.org ) at 2014-03-07 14:23 UTC
14:25 Guest76774 Nmap scan report for ip-10-146-177-186.ec2.internal (10.146.177.186)
14:25 Guest76774 Host is up (0.00035s latency).
14:25 Guest76774 PORT      STATE SERVICE
14:25 Guest76774 24007/tcp open  unknown
14:25 glusterbot Title: Nmap - Free Security Scanner For Network Exploration & Security Audits. (at nmap.org)
14:25 Guest76774 Nmap done: 1 IP address (1 host up) scanned in 0.06 seconds
14:26 edong23 dude
14:26 edong23 are you retarded?
14:26 edong23 stop blasting the channel
14:26 edong23 use pastie.org
14:26 Guest76774 sorry :(
14:26 edong23 or dpaste
14:26 Guest76774 ok I will
14:27 primechuck joined #gluster
14:28 Guest76774 also I can telnet on this port 24007
14:31 theron_ joined #gluster
14:33 Guest76774 but client can't connect...
14:34 Norky check the client logs, /var/log/glusterfs/VOLNAME.log
14:35 Guest76774 here is my logs:  http://pastie.org/8889769    http://pastie.org/8889775
14:35 glusterbot Title: #8889769 - Pastie (at pastie.org)
14:35 Guest76774 standart and with TRACE
14:36 Norky okay, unmount a currently working client and see if you can mount it again
14:37 mohankumar joined #gluster
14:40 japuzzo joined #gluster
14:50 Guest76774 done, it's okay here, tried 3 times
14:50 Guest76774 umount & mount again
14:51 jobewan joined #gluster
14:54 failshell joined #gluster
14:55 vpshastry joined #gluster
14:58 neurodrone joined #gluster
15:01 vimal joined #gluster
15:07 seapasulli joined #gluster
15:08 benjamin_____ joined #gluster
15:12 Guest76774 btw on my VMs
15:12 Guest76774 everything is ok
15:16 lflores_ joined #gluster
15:25 bugs_ joined #gluster
15:28 andreask joined #gluster
15:31 lmickh joined #gluster
15:37 abyss^ joined #gluster
15:42 calum_ joined #gluster
15:47 capncrunch4me joined #gluster
15:52 jruggier1 joined #gluster
15:53 NuxRo joined #gluster
15:56 jruggier1 left #gluster
15:58 johnmark coming up: Integration Nation on #gluster-meeting
15:58 johnmark live video at http://www.gluster.org/2014/03/gluster-spotlight-integration-nation/
16:06 kaptk2 joined #gluster
16:06 zapotah_ joined #gluster
16:06 zapotah_ joined #gluster
16:07 abyss^_ joined #gluster
16:09 larsks|alt joined #gluster
16:10 stefanha joined #gluster
16:15 irctc720 joined #gluster
16:16 Norky Guest76774, enable debug logging on the volume, unmount and then mount on both a working client and a non-working one and compare the two client logs to see where they diverge
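A sketch of that procedure, assuming the volume is named gv0 and is mounted at /mnt/gv0 on both clients:

    gluster volume set gv0 diagnostics.client-log-level DEBUG
    # on each client:
    umount /mnt/gv0 && mount -t glusterfs server1:/gv0 /mnt/gv0
    # pull the broken client's log over and compare
    scp broken-client:/var/log/glusterfs/mnt-gv0.log /tmp/broken.log
    diff /var/log/glusterfs/mnt-gv0.log /tmp/broken.log | less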
16:19 kkeithley1 joined #gluster
16:21 lalatenduM joined #gluster
16:23 smithyuk1 Hi guys, having a little trouble with gluster rebalance. I'm on version 3.4.2; have upgraded from 3.3.0 on all (3) of our sites (using the same method). Now one of the sites rebalances fine (even data distribution) as you would expect. In our other two sites data will hardly move at all, the disk space after is barely different whereas it was very visible in our first site. The only difference that I can see between them is that the two sites that won'
16:23 smithyuk1 t rebalance have 2 new (el6) machines (the rest are el5) which have multiple bricks in the same cluster (which our old servers don't have)
16:24 smithyuk1 I have done a gluster reset on settings on all sites to ensure nothing is different but no joy, any ideas?
16:29 seapasulli joined #gluster
16:31 cjanbanan joined #gluster
16:33 kris joined #gluster
16:35 irctc720 joined #gluster
16:40 kris joined #gluster
16:40 JoeJulian smithyuk1: First place to check is the logs. There should be one related to rebalance on the server where you started the task.
16:43 JoeJulian elyograg: wrt python script, that's a good idea. I might take a look at improving that over the weekend. It was written pre .glusterfs directory so there's quite a few enhancements that are possible.
16:43 smithyuk1 Nothing looks incorrect in the logs, it says fixing layout for everything and then migrate data, then migrate data complete
16:43 smithyuk1 but the disk size doesn't seem to be changing or it's not moving things; let me check one of the items from the log
16:44 smithyuk1 It doesn't say where it's moving it to in the logs? or am I just missing it
16:46 smithyuk1 I spoke to you a while ago about every server in the cluster showing up as localhost, if I do a peer status then it fixes it but only until I restart glusterd. I wonder if it's trying to rebalance the files only to itself?
16:48 glusterbot New news from newglusterbugs: [Bug 1074023] list dir with more than N files results in Input/output error <https://bugzilla.redhat.com/show_bug.cgi?id=1074023>
16:49 kris joined #gluster
16:49 elyograg smithyuk1: I see lots of localhost in a rebalance status too.  I found a bugid for it, now I can't seem to locate it.
16:50 smithyuk1 I followed the steps in there to fix it but the fix seems temporary
16:50 elyograg IIRC, the bug says that restarting gluster is actually what causes the bug.
16:50 lflores hello, I created this (https://bugzilla.redhat.com/show_bug.cgi?id=1074023) bug, regarding the list dir problem
16:50 glusterbot Bug 1074023: high, unspecified, ---, csaba, NEW , list dir with more than N files results in Input/output error
16:50 smithyuk1 Oh brilliant ^^
16:50 lflores (opened really ...)
16:51 JoeJulian lflores: I was just reading the scrollback on that. Interesting bug.
16:52 Oneiroi joined #gluster
16:53 lflores well, it's really serious ... can't create more than 20 or so files in a dir ... but thanks to Norky, we already know it's size related ..
16:53 elyograg found it.  bug 956188
16:53 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=956188 medium, high, ---, vraman, ASSIGNED , DHT - rebalance - 'gluster volume rebalance <volname> status' shows 2 entry ( 2 rows) for one host
16:54 lflores somehow, it seems the dir listing response is size bounded ...
16:55 lflores I find really strange such a serious bug to show up now ...
16:55 semiosis :O
16:55 satheesh joined #gluster
16:56 smithyuk1 elyograg: cheers, seems like my rebalance thinks it's okay but isn't actually doing anything
16:57 JoeJulian lflores: Have you checked the brick logs?
16:57 lflores it's all fine
16:58 lflores notice that if I mount with nfs I can list the directories
16:58 GabrieleV joined #gluster
16:59 lflores I increased the brick log level and client log level to DEBUG, and nothing strange shows up
16:59 JoeJulian So, no error on the client when that fails, and no error on the brick either, right?
16:59 Norky lflores, you should probably attach the client and brick logs to that bug report
16:59 lflores correct
16:59 lflores I tried on another server, another client, all have the same error
16:59 lflores another volume, replicated volume, distributed
17:00 marcoceppi joined #gluster
17:00 marcoceppi joined #gluster
17:00 Norky have you tried with an XFS brick yet?
17:00 lflores the bottom line: the brick filesystem shows everything ok, I can create and delete files inside the folder, I just can't list the folder if it is mounted with glusterfs
17:01 lflores if I remove 1 file from there, I can list it
17:01 lflores if I create files with longer names, I can list a few more files
17:01 Norky a "strace ls /mnt" would probably help the devs too
17:01 lflores (a few less files!)
17:02 Norky bah, trying to replicate this but the vagrant boxes I'm using are centos
17:02 Norky or there's one with fedora18
17:03 lflores there should be gluster 3.4.2 specific
17:03 lflores it should
17:03 ndevos lflores: the servers that have your bricks, are they also armv7hl, or are they 'normal' x86 systems? 32-bit, or 64?
17:04 ndevos I have armv7hl as servers, and x86_64 as client, xfs on the bricks
17:04 ndevos and yes, it happens to me too when mounting over glusterfs, not when mounting over nfs
17:04 lflores x86 and x86_64
17:04 Norky ndevos, you're seeing the same error, right?
17:04 ndevos Norky: yeah
17:05 ndevos lflores: ah, in that case, I am suspecting some 32 <-> 64 bit incomatibility
17:05 Norky lflores, which is x86? client or server?
17:05 ndevos *incompatibility
17:05 Norky yeah, what ndevos said
17:05 lflores added the strace output
17:05 ndevos flrichar: x86 server, x86_64 client, or the other way around?
17:05 lflores only tried x86 in server
17:05 Oneiroi joined #gluster
17:06 lflores (but tried with x86_64 server too)
17:06 ndevos sorry flrichar, that was meant for lflores
17:06 flrichar no problemo ;O)
17:06 JoeJulian What if you fuse mount from the server?
17:06 lflores also tried client
17:07 lflores happens in both, x86, x86_64 in all client/server combinations
17:07 ndevos JoeJulian: ah, good idea - works for me (mount localhost over glusterfs and 'ls')
17:07 lflores (using ext4 brick fs, but it seems to me the error is too far away from the brick fs to matter in this case)
17:08 Norky lflores, yes, you're probably right, but it's always worth testing
17:08 lflores fuse mount on server: same error
17:08 lflores SELinux disabled
17:09 Norky however as ndevos is using xfs on his bricks and seeing, we think, the same problem, that's good enough
17:11 jobewan joined #gluster
17:11 hagarth joined #gluster
17:11 JoeJulian REally!?! That is completely unexpected.
17:12 JoeJulian Same kernel version on clients and servers?
17:12 Norky openat(AT_FDCWD, "test/", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
17:12 Norky getdents(3, 0x180cc90, 32768)           = -1 EIO (Input/output error) is there
17:12 JoeJulian (besides arch, of course)
17:12 lflores same kernel
17:12 Norky lflores is on Fedora 20 with glusterfs 3.4.2 throughout his environment I believe
17:13 lflores correct, fully updated
17:13 swat30 joined #gluster
17:13 ndevos same here
17:13 ultrabizweb joined #gluster
17:13 ndevos well, my workstation/laptop is a RHEL6
17:13 JoeJulian Norky: I asked because occasionally kernel patch versions lag between architectures.
17:14 Norky lflores, to confirm, you did "mount -t glusterfs localhost:/VOLNAME /mnt ; ls /mnt/test" on the server, correct?
17:15 Lflores2 joined #gluster
17:15 lflores correct
17:15 JoeJulian Is that EIO error on the client or the brick?
17:15 Norky i.e. making the server a client of itself
17:15 lflores on client
17:16 sijis joined #gluster
17:16 lflores yes, I mounted the exported glusterfs on server
17:16 Norky that EIO was from a strace of ls run on the client - see https://bugzilla.redhat.com/show_bug.cgi?id=1074023#c3
17:16 glusterbot Bug 1074023: high, unspecified, ---, csaba, NEW , list dir with more than N files results in Input/output error
17:16 ndevos JoeJulian: I've added the strace summary to the bug now, just press F5 ;)
17:16 lflores the error is consistent, it always shows up, on all combinations (arch,server/client) on Fedora 20 with updated gluster
17:17 JoeJulian I expect no errors, but strace the brick(s) and cause the problem.
17:18 JoeJulian I'm running the same version, xfs bricks, centos 6.5 servers and Fedora 20 clients without that problem.
17:18 ndevos JoeJulian: the strace was from the 'ls', the getdents() syscall returns it, we'll probably need systemtap to get to the details of why getdents() returns -EIO
17:19 JoeJulian We could check by stracing the brick and/or wireshark to see if the error is coming from the server.
17:19 Norky hmm, buffer size of 15 bits
17:19 JoeJulian I suspect it's not, but that'd be something easy to isolate.
17:20 ndevos I checked the traffic on my side, no errors there - also confirmed with nfs working correctly (same conversation with the bricks)
17:20 Norky strace the working NFS mount and check the getdents() call for comparison?
17:20 ndevos getdents() will return a list of dentries when nfs is used - getdents() is the internals of readdir()
17:21 Norky or, better still, ls a FUSE-mounted directory that contains fewer (ideally one) files
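One way to run that comparison, assuming the same directory is reachable via a FUSE mount at /mnt/fuse and an NFS mount at /mnt/nfs:

    strace -e trace=getdents ls /mnt/nfs/test    # expected: getdents() returns dentry batches
    strace -e trace=getdents ls /mnt/fuse/test   # per the bug: getdents() = -1 EIO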
17:21 Lflores2 (lflores here) I can strace and wireshark in a few hours
17:21 ndevos getdents() is in the vfs, that is done before the fuse layer, strace does not see beyond getdents()
17:22 Norky yeah, I'm struck by the fact it works with a 'smaller' directory, i.e. with fewer files
17:22 ndevos Lflores2: tcpdump/wireshark do not show any errors for me
17:23 sijis i'm getting an error mounting the share on a remote client. when i try from the gluster server, it works. the error: http://pastebin.com/jyyKGxbF
17:23 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
17:23 Norky I'm wondering if it works when getdents() is using a smaller buffer size
17:23 JoeJulian sijis: "failed to connect with remote-host" Firewall?
17:24 sijis JoeJulian: i can ping and firewalls are disabled on both
17:24 ndevos Norky: it is probably something in the glusterfs-fuse <-> fuse-kernel-module part, but glusterfs-fuse with log-level=TRACE does not help, next step would be debugging with systemtap (or gdb)
17:24 Matthaeus joined #gluster
17:25 JoeJulian It's returning "No route to host" which is an ICMP message. If the server wasn't listening, you would get a different error. Since you can ping, that means that something is actively returning that icmp message. You have a firewall issue.
17:25 Norky fair enough, this is at the periphery of my understanding - I was curious if it were related to that returned struct being over a certain threshold size
17:26 Lflores2 joined #gluster
17:26 JoeJulian ndevos: Could you confirm that a volume on your 64 bit box doesn't have that problem?
17:27 Matthaeus joined #gluster
17:27 ndevos JoeJulian: I only have a 64-bit client
17:27 JoeJulian but you could make a 64 bit server, right?
17:27 Norky sure, if it were, as you say, one would need to look into internal debugging, and that's beyond my ken :)
17:27 JoeJulian Oh, i do still have a 32 bit box around... Let's see if that fails on a 32 bit intel...
17:28 Lflores2 I confirm that I have the problem on 64 bit
17:28 JoeJulian bah!
17:28 ndevos Norky: it is well possible a mismatch of sizes somewhere... but in most cases the 32-bit system (my server) would have a smaller size/buffer than the client (64-bit) - its strange
17:29 ndevos Lflores2: 32-bit client, 64-bit server?
17:30 Lflores2 yes, but I will recheck
17:30 bchilds joined #gluster
17:30 JoeJulian Hmm, my f20 box is still running 3.11.10...
17:30 Lflores2 but I'm moving now
17:31 Lflores2 I have 3.13.5
17:31 Lflores2 latest
17:33 bchilds hey guys. libgfapi question-- is namespace consistency guaranteed with the glfs_mknod(..) call? i'm seeing a case where i create a file on one peer and open it immediately after and it not showing up on another node.. its not 100% of the time (almost impossible to reproduce on VMs) , but quite often on a fast cluster
17:33 Norky Lflores2, sorry we couldn't immediately help, thank you for an interesting problem at least :)
17:33 Norky cheerio
17:33 Lflores2 thx for the support!
17:33 theron joined #gluster
17:33 JoeJulian I'm upgrading my F20 box to see if the problem is kernel related.
17:34 sputnik13 joined #gluster
17:34 ndevos JoeJulian: I doubt that, my laptop runs RHEL6 (2.6.32-431.5.1.el6.x86_64)
17:34 ndk joined #gluster
17:36 ndevos bchilds: do you run into problems with that, or is it just a request to understand things better?
17:36 bchilds well i'm running into problems with the application (which depends on very strict namespace consistency) but trying to understand too if this is considered proper behavior
17:36 ndevos bchilds: there are several xlators that could have something to do with that, there is open-behind and write-behind, these two cache some operations on the client-side
17:37 JoeJulian Then what's different between your failures and my success?
17:39 vpshastry joined #gluster
17:39 sijis JoeJulian: i rebooted the client system. i'm not seeing any more 'route to host' i'm seeing this now: http://fpaste.org/83435/13942139/
17:39 glusterbot Title: #83435 Fedora Project Pastebin (at fpaste.org)
17:39 vpshastry left #gluster
17:42 bchilds ndevos: is there something I can pass to glfs_mknod that will disable the translators and force strict namespace consistency across all peers?
17:42 zapotah joined #gluster
17:42 zapotah joined #gluster
17:43 ndevos bchilds: you can disable those translators with 'gluster volume set $VOLUME performance.write-behind disable' - I guess open-behind works similar
17:43 bchilds ok perfect
17:44 ndevos bchilds: alternatively you can copy the .vol file to your client, modify it and pass the filename to the glfs_init() (or whatever call contains the server/volume string)
17:45 elyograg my heal info is now clean.  that took forever to manually fix everything.  running one more pass of my modified 'dirty xattrs' script across all 32 bricks to catch any stragglers, but I think I'm about ready to fire up that rebalance again.
17:45 JoeJulian sijis: How about /var/log/glusterfs/etc-glusterfs-glusterd.vol.log from dil-vm-gfs-01
17:45 ndevos bchilds: the client gets its .vol file with something similar to this: gluster system:: getspec $VOLUME
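Putting those two hints together, a sketch (the volume name gv0 is assumed):

    gluster system:: getspec gv0 > /tmp/gv0.vol
    # edit /tmp/gv0.vol to remove the write-behind/open-behind xlator stanzas, then
    # mount from the modified volfile instead of server:/volume
    glusterfs -f /tmp/gv0.vol /mnt/gv0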
17:45 aixsyd_ JoeJulian: I figured out what my overall issue was with those getfattr and setfattr thingys. I'm using ZFSonLinux under Gluster - and ZFS was causing corrupted symlinks and when I setfattr to all 0's, I was creating split-brains. Screw ZFS - going back to XFS.
17:46 JoeJulian elyograg: I would do a find on the client mount first, just to make sure that every file has had a lookup()
17:47 Lflores2 joined #gluster
17:47 elyograg is a find good enough, (looking for something like -type s), or would I need to actually do a stat on what it finds?
17:47 elyograg the -type s is just to be sure it doesn't actually have any non-error output. :)
17:47 * ndevos wishes you all a good weekend, cya!
17:48 elyograg strace doesn't seem to have stat() calls.
17:48 JoeJulian I've always been successful with just a "find >/dev/null". find has to do a lookup and stat just to determine if the dirent is a directory that it needs to add to the crawl.
17:49 elyograg i see newfstatat
17:49 JoeJulian It's the fuse lookup() call that's important.
17:50 zerick joined #gluster
17:50 elyograg I don't see that either.  or is it buried deeper in stuff that can't be seen with strace?
17:50 JoeJulian The only reason we use stat to trigger a self-heal is because it's a fairly thin function call to trigger that lookup.
17:51 JoeJulian Right, it's not a function that's called by the application.
17:51 sijis JoeJulian: here's are few lines from that log file: http://fpaste.org/83438/42146801/
17:51 glusterbot Title: #83438 Fedora Project Pastebin (at fpaste.org)
17:51 sijis not sure if you need more than that
17:52 JoeJulian sijis: Looks like you probably have a version mismatch.
17:52 JoeJulian ~latest | sijis
17:52 glusterbot sijis: The latest version is available at http://download.gluster.org/pub/gluster/glusterfs/LATEST/ . There is a .repo file for yum or see @ppa for ubuntu.
17:53 sijis JoeJulian: how can you tell from that its like a mismatch?
17:54 JoeJulian I've seen that "Auth too weak" from that before.
17:54 sijis the versions have to match?
17:55 stefanha How can I temporarily disable io-cache, read-ahead, or any other xlator for testing?  Is there something like gluster myvol set xlator.write-behind off?
17:55 JoeJulian Since 3.3.x, it's not quite as critical, but it's (of course) strongly recommended. Prior to 3.3, the minor versions were incompatible with each other.
17:56 sijis JoeJulian: ahh.. server is running 3.2.7, client is 3.4.0 ;/
17:57 stefanha (Perhaps stop the volume and hack the .vol file manually or is there a command to do this?)
17:59 stefanha There is a VM running using a file in this volume, so I'd rather disable the xlator without stopping the volume...
18:03 nshaikh joined #gluster
18:06 navid__ joined #gluster
18:09 aixsyd_ Oy.... I thought XFS would be easier than ZFS...
18:10 aixsyd_ I go to create an image, and its pre-allocating the entire VM image.. and times out my hypervisor waiting for it. *greaaaaaaaat*
18:11 JoeJulian stefanha: You can, of course, use the "gluster volume set" commands to change those, but that changes the entire volume. If you just want to change it for one test client, then I'd probably do what you suggest and mount a hacked vol file.
18:12 stefanha JoeJulian: I'm okay with changing for the entire volume but I don't know the set syntax to disable these xlators
18:12 stefanha JoeJulian: Any idea where to find it?
18:12 stefanha JoeJulian: Ideally it would really disable the xlator and not just set a cache size to 0.
18:13 stefanha JoeJulian: (because I'm paranoid that the xlators are messing up requests)
18:13 aixsyd_ Anyone have any idea why, when I try to make a sparce image file, glusterfsd and "flush" take up 100% I/O, but dont actually do anything?
18:13 aixsyd_ *sparse
18:14 daMaestro joined #gluster
18:16 aixsyd_ This is crazy - I cant move any of my VMs to gluster now because of this !?!
18:17 diegows joined #gluster
18:17 JoeJulian How big are the images?
18:17 aixsyd_ 50-500GB
18:17 aixsyd_ but theyre sparse. maybe 10GB used?
18:18 aixsyd_ gluster is allocating the entire 500GB first
18:18 JoeJulian I have no clue how (or if) sparse files are handled through fuse.
18:18 aixsyd_ and that takes time, and the hypervisor is like, "u wot m8?" and times out after 30 secs
18:19 glusterbot New news from newglusterbugs: [Bug 1065656] "make install" doesn't honour --prefix for mount.glusterfs <https://bugzilla.redhat.com/show_bug.cgi?id=1065656>
18:19 aixsyd_ if I create a new /raw image files (aka, 100% new) and tell it its a 3TB image, it creates a correct 52KB file. its 100% sparse. takes no time.
18:19 aixsyd_ *.raw
18:21 aixsyd_ i wonder if i can just scp the image from the client from one mounted cluster to another...
18:21 aixsyd_ er, via the client.
18:21 JoeJulian I don't see why not.
18:22 Mo__ joined #gluster
18:23 aixsyd_ That worked!
18:23 aixsyd_ And went about 10x faster than via the hypervisor
18:23 aixsyd_ ....thats strange.
18:23 aixsyd_ but the VM boots up and works.
18:23 cjanbanan joined #gluster
18:26 marianogg9 left #gluster
18:40 lalatenduM joined #gluster
18:46 B21956 joined #gluster
18:46 bala joined #gluster
18:49 glusterbot New news from newglusterbugs: [Bug 1074045] Geo-replication doesn't work on EL5, so rpm packaging of it should be disabled <https://bugzilla.redhat.com/show_bug.cgi?id=1074045>
18:54 jclift aixsyd_: There's definitely mention in gluster git log history about handling sparse files, so that's potentially a bug
18:54 jclift aixsyd_: Would you be ok to create a new Bugzilla ticket about it, so it can be looked at?
18:55 JoeJulian stefanha: "gluster volume set help" Look at the performance.* options.
18:58 zaitcev joined #gluster
19:00 elyograg I got this in alarm email from my monitoring system.  Should it concern me?  It's from the machine where I mount things with NFS, which has no bricks.  http://fpaste.org/83453/42187471/
19:00 glusterbot Title: #83453 Fedora Project Pastebin (at fpaste.org)
19:01 Philambdo joined #gluster
19:02 elyograg the log timestamp was not quite half an hour ago.  Well after I fixed my heal info, and during a fill 'find' on a fuse mount.
19:02 elyograg s/fill/full/
19:02 glusterbot What elyograg meant to say was: the log timestamp was not quite half an hour ago.  Well after I fixed my heal info, and during a full 'find' on a fuse mount.
19:02 zapotah joined #gluster
19:02 zapotah joined #gluster
19:04 bala joined #gluster
19:05 NuxRo joined #gluster
19:08 capncrunch4me I'm new to Gluster and decided to use it over Ceph...primarily because I like the simplicity in Gluster over Ceph
19:08 capncrunch4me I have a question, though
19:09 stefanha JoeJulian: Thanks, I ended up restarting the volume and it helped - the I/O issues went away when I disabled some of the performance xlators.
19:09 capncrunch4me can you separate the journals on ssd and the bricks on spinners?
19:09 JoeJulian There are no journals yet.
19:09 capncrunch4me by default my os (Centos) is installed on SSD and I was mounting the spinners on btrfs
19:09 capncrunch4me is there any caching that can be done?
19:10 JoeJulian Now that I've said that... I need to check and see if that made it into 3.5...
19:11 aixsyd_ JoeJulian: is there any correlation of the number of network interfaces you and and the number of default threads glusterfsd spawns when doing a file transfer?
19:11 aixsyd_ *you have and
19:11 aixsyd_ I have 4x gig nics, and its spawning 4x glusterfsd's
19:12 capncrunch4me JoeJulian: so there is no "buffering" whatsoever in Gluster?
19:12 elyograg capncrunch4me: if you are talking about journals for the underlying filsystems (xfs, ext4, etc) ... that's an operating system detail that is completely independent of gluster.
19:12 JoeJulian No, there's not.
19:13 capncrunch4me elyograg: So, I am running tuned ext4 for my OS drive...but the bricks will run on top of BTRFS
19:13 capncrunch4me the bricks are hybrid sshd spinners
19:13 JoeJulian aixsyd_: I wouldn't think so.
19:13 aixsyd_ Cause when I pull from a server with only two nics, only two threads spawn.
19:13 elyograg still an independent implementation detail.
19:14 capncrunch4me is there any sort of "tuning" that I would want to do independent of just basic btrfs tuning?
19:14 capncrunch4me and caching would basically all be done by linux's filesystem cache?
19:15 elyograg for xfs, the inode size should be at least 512.  I use 1024.  If btrfs supports changing that, you'll want to do so.
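For xfs that would look like this (the device name is a placeholder):

    mkfs.xfs -i size=512 /dev/sdb1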
19:15 JoeJulian aixsyd_: Are they nics all up and have addresses?
19:15 aixsyd_ 802.3AD LACP bound nics, so yes
19:16 kkeithley1 joined #gluster
19:16 lalatenduM joined #gluster
19:23 JoeJulian aixsyd_: Looks like it spawns on a per-transaction basis so I assume that must be related. I don't know a lot about how 802.3AD actually functions.
19:24 wrale Do replicas offer any performance benefit for reads?
19:25 JoeJulian wrale: They can, sure.
19:25 lflores joined #gluster
19:26 wrale JoeJulian: thanks.  needed a reality check..
19:27 JoeJulian 1m clients all wanting to stream the latest American Idol, you'll need replicas just to provide the bandwidth.
19:27 wrale Makes sense.  One of things that drew me to GlusterFS years back was the Pandora story ..
19:28 JoeJulian Heh, pandora is more of a dht user. Their brilliant method of ensuring that their replication isn't overloaded is that they never actually give you what you ask for.
19:29 wrale lol.. good to know. i'm wrong a lot :)
19:29 wrale back to fio test.. thanks
19:29 wrale s/test/tests
19:29 wrale case in point
19:30 JoeJulian You get 1m people going to pandora wanting to hear the latest Katy Perry song you get 1m people listening to 100k different songs.
19:30 wrale JoeJulian: that's rather ingenious, too.. until spotify
19:31 Matthaeus joined #gluster
19:32 JoeJulian Ooh, and Oracle event I might actually go to....
19:33 JoeJulian ... but only because they're providing lunch and it's at Morton's Steakhouse.
19:33 elyograg I only use free products from Oracle.  Notably MySQL and Java.
19:34 JoeJulian I try (and miserably fail) to avoid Java, and I use MariaDB.
19:35 elyograg I'm in charge of Solr at work.  Can't avoid it.  Most of our websites are servlets, too.
19:35 JoeJulian I might give them crap about their unbreakable kernel.
19:36 zapotah joined #gluster
19:36 zapotah joined #gluster
19:36 elyograg I've actually found it to be a useful language for my own Solr related code.
19:37 JoeJulian I've just had way too many occasions where I have to use 7 for this and 6 for that because neither one works reliably for some application.
19:39 elyograg i've found that programs written for 6 usually work without a problem on 7.  although if you're trying to use a non-{oracle,sun} JVM, I've heard things are a bit of a nightmare.
19:40 JoeJulian I've had issues with elasticsearch or logstash, I don't remember which, on oracle/sun jvm 7
19:50 thegreenhundred joined #gluster
19:51 theron joined #gluster
19:52 theron joined #gluster
19:53 theron_ joined #gluster
19:55 elyograg seems odd that elasticsearch would have a problem.  it uses Lucene, and Solr (which works great in Java 7, but is made for Java 6) is part of the Lucene code base.  I suppose there could be problems in the ES code itself.
20:12 semiosis java <3
20:24 ekobox joined #gluster
20:28 mtanner joined #gluster
20:32 elyograg repeating this.  still a problem, got new error messages similar.  I got this in alarm email from my monitoring system.  Should it concern me?  It's from the machine where I mount things with NFS, which has no bricks.  http://fpaste.org/83453/42187471/
20:32 glusterbot Title: #83453 Fedora Project Pastebin (at fpaste.org)
20:36 elyograg I'm also seeing this repeating over and over in the nfs.log: http://fpaste.org/83488/39422456/
20:36 glusterbot Title: #83488 Fedora Project Pastebin (at fpaste.org)
20:40 JoeJulian I assume 3f629e64-ccf0-4d9b-b7ba-1782f8abeb2b must be a directory.
20:40 elyograg interesting.  JMP/jmpphotos doesn't exist.  JMP/jmphotos does (one p).
20:41 rwheeler joined #gluster
20:42 elyograg offtopic.  reviews on this are awesome: http://www.amazon.com/Tengwar-Script-Prayer-Comfort-Stainless/dp/B00HZV4PMQ/ref=cm_cr_pr_product_top
20:50 elyograg I wonder if maybe I have some corruption in the confusing symlink infrastructure.
20:51 JoeJulian I expect so.
20:52 JoeJulian Nothing below .glusterfs/*/*/ should be a directory.
20:52 tdasilva joined #gluster
20:52 elyograg will 'heal full' be helpful at all?
20:53 JoeJulian Not if that previous statement is incorrect.
20:53 marcoceppi_ joined #gluster
20:54 elyograg there are some ... .glusterfs/00/9c and so on.  i assume you mean nothing below ./glusterfs/XX/XX, right?
20:57 andrewklau joined #gluster
21:02 asku joined #gluster
21:04 elyograg find -O3 d00v00/mdfs/.glusterfs -type d | grep -v "\.glusterfs/../..$"
21:05 elyograg i still get .glusterfs/XX so I have some idea of how far along it is.
21:05 elyograg very slow.
21:20 Elico joined #gluster
21:21 Elico I Was wondering about "VMware Gluster Virtual Storage Appliance" is it free? what are the benefits?
21:22 semiosis @gsa
21:22 glusterbot semiosis: I do not know about 'gsa', but I do know about these similar topics: 'git', 'gmc'
21:22 semiosis @appliance
21:22 glusterbot semiosis: I do not know about 'appliance', but I do know about these similar topics: 'appliances'
21:22 semiosis @appliances
21:22 glusterbot semiosis: The Red Hat Storage Software Appliance for bare-metal: http://www.redhat.com/products/storage/storage-software/ and the Virtual Storage Appliance for virtualized & cloud servers: http://www.redhat.com/products/storage/virtual-storage
21:22 semiosis hmm, thats not vmware
21:22 semiosis Elico: where did you hear about this?  got a link?
21:23 Elico yes it's in the gluster web site.
21:23 semiosis looks like this is ancient, from the Gluster 3.1 days
21:23 Elico http://www.gluster.org/community/documentation/index.php/Gluster_3.1:_Licensing_Gluster_Virtual_Storage_Appliance_for_VMware
21:23 semiosis well maybe not ancient, but really old
21:23 glusterbot Title: Gluster 3.1: Licensing Gluster Virtual Storage Appliance for VMware - GlusterDocumentation (at www.gluster.org)
21:24 semiosis probably doesn't exist anymore
21:24 semiosis i haven't heard of it in years
21:24 Elico yes but is there something free these days? or I just need to install it?
21:24 semiosis glusterfs is free software
21:24 Elico All CLI ? web interfaces?
21:24 _dist joined #gluster
21:24 Elico yes this is why I was wondering when I have seen RH stuff on it semiosis
21:24 semiosis most people install it using the package manager on their favorite linux distro
21:25 semiosis there are official package repositories for rpms and debs
21:25 semiosis see ,,(repo)
21:25 glusterbot I do not know about 'repo', but I do know about these similar topics: 'git repo', 'ppa repo', 'repos', 'repository', 'yum repo'
21:25 semiosis see ,,(yum repo)
21:25 glusterbot The official community glusterfs packages for RHEL (including CentOS, SL, etc) are available at http://download.gluster.org/pub/gluster/glusterfs/. The official community glusterfs packages for Fedora 18 and later are in the Fedora yum updates (or updates-testing) repository.
21:25 semiosis see also ,,(ppa)
21:25 glusterbot The official glusterfs packages for Ubuntu are available here: 3.4 stable: http://goo.gl/u33hy -- 3.5 QA: http://goo.gl/Odj95k -- introducing QEMU with GlusterFS 3.4 support: http://goo.gl/7I8WN4
21:25 samppah you can also use ovirt to manage glusterfs
21:25 Elico what is the stable one?
21:25 semiosis and there's an apt repo for debian at ,,(latest)
21:25 glusterbot The latest version is available at http://download.gluster.org/pub/gluster/glusterfs/LATEST/ . There is a .repo file for yum or see @ppa for ubuntu.
21:25 samppah but cli is really simple
21:25 Elico yes but I need it for home usage..
21:25 semiosis stable is version 3.4.2 currently
21:26 Elico Then this ..http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.2/CentOS/glusterfs-epel.repo
21:26 semiosis if you are using centos that is probably what you want
21:27 Elico yes this is it.
21:27 Elico Can I replicate a live FS?
21:27 Elico like mail server?
21:28 Elico my current mail server needs transfer from one server to another. naturally I would use rsync with down time :\
21:29 Elico thanks until now anyway..
21:33 Elico @nfs
21:33 glusterbot Elico: To mount via nfs, most distros require the options, tcp,vers=3 -- Also an rpc port mapper (like rpcbind in EL distributions) should be running on the server, and the kernel nfs server (nfsd) should be disabled
21:33 samppah Elico:  are you looking for something like live migration of VM or do you just have one machine that you need to move from one host to another one?
21:37 _dist now that I have about 20vms live on a gluster replicate I'm going to write a script so I can tell when the files are healthy and present it via html. If anyone cares to read my idea, let me know if you see problems/me doing something in a dumb way https://dpaste.de/7mkV/raw
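Not the pasted script itself, but a generic sketch of that kind of health check, assuming a volume named gv0: sum the outstanding entries reported by heal info and call the volume healthy only when the total is zero.

    entries=$(gluster volume heal gv0 info | awk '/^Number of entries:/ {sum+=$4} END {print sum+0}')
    if [ "$entries" -eq 0 ]; then echo healthy; else echo "$entries entries pending heal"; fi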
21:52 dcmbrown_ Is the gluster-fuse client in 3.4 still as slow as it is in 3.3? On a local pair of VM clients I get roughly 2.5MB/s writing to a gluster vol. To a non-gluster vol I'm getting 279MB/s.  Similarly on AWS (under 3.3) I'm getting traffic levels of 800Mb/s, write speeds to non-gluster vols of 60MB/s and to gluster vols only 25MB/s. The few tuneable options don't seem to have a whole lot of effect. With NFS the local VMs get 53MB/s.
21:52 semiosis dcmbrown_: what are you using to test this?
21:53 semiosis also, please ,,(pasteinfo)
21:53 glusterbot Please paste the output of "gluster volume info" to http://fpaste.org or http://dpaste.org then paste the link that's generated here.
21:53 dcmbrown_ I was looking for that nfs+gluster post someone had made getting NFS performance while also getting the redundancy of the gluster fuse client but the post seems to have disappeared.
21:53 _dist dcmbrown: in 3.4 I typically get 60% - 80% of native using fuse (in my tests using dd & bonnie for linux and crystal disk mark for windows). I'd recommend testing native fuse writes before testing inside a vm on the fuse mount as well
21:54 dcmbrown_ Mostly dd from /dev/zero to test files. iperf for network speeds.
21:54 semiosis dcmbrown_: yeah, we dont really recommend doing localhost nfs mounts.  there can be deadlock issues
21:54 Elico samppah: migrating a real machine..
21:54 Elico vm it's kind of
21:55 Elico "easy" to migrate. A real running server it's another story since it's home usage.
21:55 semiosis dcmbrown_: dd is not a great benchmark tool.  but if you must use it, my two tips are: use bs=1M, and run several dd in parallel, looking at the aggregate performance
21:56 _dist dcmbrown_: I prefer to use the tool "stress" it lets you simulate multiple io usages at the same time http://linux.die.net/man/1/stress
21:56 glusterbot Title: stress(1): impose load on/stress test systems - Linux man page (at linux.die.net)
21:56 _dist fdatasync or oflag=dsync on dd can help make it more realistic as well
21:56 dcmbrown_ yeah I've been doing a dd with bs=1024 and count=2000000.
21:57 semiosis dcmbrown_: also note, if you are using replication, the fuse client writes in parallel to all replicas, so for replica 2, you'll get about 1/2 of the available network throughput
21:57 dcmbrown_ none in parallel yet however.
21:57 semiosis try a larger block size.
21:57 semiosis 1k is tiny
21:58 semiosis 1M is a good bs
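A sketch of that advice: several dd writers in parallel with bs=1M (paths and counts are arbitrary; oflag=dsync per _dist's earlier tip):

    for i in 1 2 3 4; do
      dd if=/dev/zero of=/mnt/gv0/ddtest.$i bs=1M count=1024 oflag=dsync &
    done
    wait   # each dd prints its own rate; sum them for the aggregate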
21:59 dcmbrown_ well read performance is the bigger factor.  We're using replication on AWS but getting some performance delays on page loads (hence also using smaller block sizes)
21:59 semiosis php?
21:59 semiosis there are several things you can do to tune your php app and/or your webserver
21:59 semiosis see ,,(php)
21:59 glusterbot (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH
21:59 glusterbot --fopen-keep-cache
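Spelled out with glusterbot's exact flags, tip #2 might look like this; the 600-second timeouts are placeholders for "HIGH", and server/volume names are assumed:

    glusterfs --attribute-timeout=600 --entry-timeout=600 --negative-timeout=600 \
              --fopen-keep-cache --volfile-server=server1 --volfile-id=gv0 /var/www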
22:01 theron joined #gluster
22:02 dcmbrown_ yeah I believe I've read that article.  Was a little concerned with the need to restart apache for cache clearing.
22:03 dcmbrown_ I'll try the other mount options however.
22:04 semiosis dcmbrown_: modern php apps & frameworks use autoloading which, when combined with APC (even with stat checking still enabled) should give pretty good performance
22:04 semiosis also you should optimize your php include path
22:05 semiosis so that the most likely dir is first
22:05 semiosis bbiab, coffee time
22:08 Elico how would I use the client to mount a glusterfs ?
22:09 Elico how I would need the fuse client....
22:09 _dist Elico: if you have the glusterfs client installed you can do "mount.glusterfs" or "mount -t glusterfs" and then point it to host:/volume
22:09 _dist Elico: are you in a situation where you want to mount the volume through fuse on an OS that doesn't support glusterfs-client ?
22:09 DV__ joined #gluster
22:10 Elico centos ..
22:10 _dist should work fine then
22:10 Elico ok thanks.
22:11 capncrunch4me anyone running btrfs in production for brick filesystem?
22:11 _dist Elico: http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.2/CentOS/ <-- once you do your yum install then you can run mount -t glusterfs or put it in your fstab
22:11 glusterbot Title: Index of /pub/gluster/glusterfs/3.4/3.4.2/CentOS (at download.gluster.org)
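An fstab line for that, with placeholder server/volume names (_netdev delays the mount until the network is up):

    server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev  0 0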
22:11 Elico _dist: thanks this is what I have used..
22:12 al joined #gluster
22:16 elyograg capncrunch4me: I would like to.  I think some are.  The problem with it is that if you want stability, you run a fairly bleeding edge kernel.  CentOS/RHEL is not bleeding edge, but it DOES have really good support for Dell and its hardware management software, plus it is generally very stable overall, at least when you're not trying btrfs. :)
22:16 capncrunch4me I'm running 3.15
22:17 capncrunch4me from mainline
22:17 capncrunch4me on Centos
22:17 elyograg 3.4.2 on centos here.
22:18 Elico I created and deleted a volume but glusterfs states that there is still data about the volume; how would I solve it?
22:18 _dist capncrunch4me: I'm running gluster on zfs, which is "similar" to btrfs, rhel7 will be stock 3.10 I believe, I didn't even know there was 3.15 out yet, I thought 3.14 was the newest
22:19 _dist Elico: there is probably a .glusterfs directory in there, but honestly sometimes I've just had to rmdir and mkdir
22:19 capncrunch4me _dist: sorry 3.13
22:19 capncrunch4me 3.13.5-1.el6.elrepo.x86_64
22:19 elyograg Elico: if it's the error message I think it is, just paste it in the channel and glusterbot will give you a URL where you can read about how to remove the xattrs.
22:20 elyograg let's see.      or a prefix of it is already part of a volume
22:20 glusterbot elyograg: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
22:20 elyograg there it is.
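The gist of the linked article, as a sketch against a placeholder brick path (run on the server holding the brick, only after the volume has been deleted):

    BRICK=/data/gv0/brick1
    setfattr -x trusted.glusterfs.volume-id "$BRICK"
    setfattr -x trusted.gfid "$BRICK"
    rm -rf "$BRICK/.glusterfs"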
22:21 Elico thanks
22:25 theron joined #gluster
22:27 nage joined #gluster
22:27 nage joined #gluster
22:27 JoeJulian for me: file a bug
22:27 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
22:28 irctc720 joined #gluster
22:39 YazzY joined #gluster
22:39 YazzY joined #gluster
22:50 vpshastry joined #gluster
22:57 zapotah joined #gluster
22:57 zapotah joined #gluster
23:00 ekotelnikova joined #gluster
23:05 Oneiroi joined #gluster
23:07 wrale Would it be unwise to use the latest nightly version of 3.5 on F19, instead of 3.5b3 (which precedes the latest nightly in date of release)?  Looking at: http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs-3.5/fedora-19-x86_64/glusterfs-3.5.20140222.482434d-1.autobuild/
23:07 glusterbot Title: Index of /pub/gluster/glusterfs/nightly/glusterfs-3.5/fedora-19-x86_64/glusterfs-3.5.20140222.482434d-1.autobuild (at download.gluster.org)
23:07 wrale Will I be able to upgrade to 3.5ga, once it's here?
23:08 wrale I'm hoping the primary differences would be bug fixes.
23:08 JoeJulian I would test the beta release instead of the nightly
23:08 JoeJulian And yes, bug fixes.
23:08 wrale Right on.. will use that..
23:08 wrale thanks
23:12 cedric___ joined #gluster
23:20 glusterbot New news from newglusterbugs: [Bug 1074095] Build errors on EL5 <https://bugzilla.redhat.com/show_bug.cgi?id=1074095>
23:22 kam270_ joined #gluster
23:30 ccha hi JoeJulian. do you know if there are any documentation about xattrop folder ?
23:30 JoeJulian None that I've ever seen.
23:30 JoeJulian ... and I haven't documented it myself.
23:32 ccha ok
23:35 neurodrone joined #gluster
23:44 kam270_ joined #gluster
23:45 neurodrone joined #gluster
23:51 badone_ joined #gluster
23:58 cjanbanan joined #gluster
