
IRC log for #gluster, 2015-06-30


All times shown according to UTC.

Time Nick Message
00:00 cholcombe joined #gluster
00:09 gildub joined #gluster
00:28 kripto_ joined #gluster
00:33 jrm16020 joined #gluster
00:35 vmallika joined #gluster
00:58 suliba joined #gluster
01:06 mribeirodantas joined #gluster
01:18 cholcombe joined #gluster
01:20 Pupeno_ joined #gluster
01:24 TheCthulhu2 joined #gluster
01:27 nangthang joined #gluster
01:47 akay1 are you around anoopcs?
01:49 jmarley joined #gluster
01:52 cholcombe joined #gluster
01:52 InitX joined #gluster
01:54 _joel left #gluster
01:56 akay1 anyone seen a problem with the trashcan where deleting a file doesnt show any file on the .trashcan mount point (but it does create the appropriate folder) and the file exists on the bricks but as 0 size?
02:22 XpineX joined #gluster
02:31 xzpeter joined #gluster
02:35 xzpeter Hello, everyone. I would like to deploy glusterfs cluster. Which distribution do you suggest? I really care much about the performance and stability of the cluster, and I am not sure whether there are some platform-dependent code inside gluster? (e.g., since gluster belongs to RH now, should I use CentOS/RHEL to make gluster perform the best?)
03:11 anoopcs akay1, Hey, can you explain the scenario to me?
03:14 overclk joined #gluster
03:17 anoopcs akay1, What was the volume configuration? Replicate, Distribute or Distribute-Replicate?
03:22 kshlm joined #gluster
03:26 rejy joined #gluster
03:26 kdhananjay joined #gluster
03:27 kdhananjay joined #gluster
03:33 shubhendu joined #gluster
03:36 meghanam joined #gluster
03:40 ramkrsna joined #gluster
03:40 TheSeven joined #gluster
03:42 overclk joined #gluster
03:53 bharata-rao joined #gluster
04:01 itisravi joined #gluster
04:05 glusterbot News from newglusterbugs: [Bug 1236933] Ganesha volume export failed <https://bugzilla.redhat.com/show_bug.cgi?id=1236933>
04:17 atinm joined #gluster
04:19 dusmant joined #gluster
04:25 hchiramm_home joined #gluster
04:27 gildub joined #gluster
04:32 kshlm joined #gluster
04:32 sripathi joined #gluster
04:35 glusterbot News from newglusterbugs: [Bug 1236945] glusterfsd crashed while rebalance and self-heal were in progress <https://bugzilla.redhat.com/show_bug.cgi?id=1236945>
04:44 jiffin joined #gluster
04:48 ramteid joined #gluster
04:51 akay1 hi anoopcs are you still around?
04:51 anoopcs akay1, yup
04:51 sakshi joined #gluster
04:51 akay1 ok great :) distribute-replicate
04:52 akay1 Number of Bricks: 7 x 2 = 14
04:52 anil joined #gluster
04:53 anoopcs akay1, Was there rebalance process running in  parallel?
04:53 raghu joined #gluster
04:53 akay1 not no rebalance
04:53 akay1 nope*
04:54 gem joined #gluster
04:56 anoopcs akay1, Can you perform a complete path based lookup on mount i.e, ls <path to file in trashcan>
04:58 anoopcs akay1, From the mount, suppose the file deleted was dir/file, then do ls .trashcan/dir/file*
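For anyone following along, the lookup anoopcs suggests looks roughly like this; the volume name `gv0`, mount point `/mnt/gv0` and path `dir/file` are placeholders from this conversation, not fixed names:

```shell
# On a client with the volume FUSE-mounted (placeholder mount point):
cd /mnt/gv0

# Deleting a file should make the trash translator stash a copy under
# .trashcan/, keeping the directory hierarchy.
rm dir/file

# Do a full-path lookup rather than a plain directory listing; lookup and
# readdir take different code paths, which matters when chasing this bug.
ls -l .trashcan/dir/file*
```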
04:59 akay1 Stale file handle
05:00 akay1 folder permissions in the mount showing up as: ??????????   ? ?    ?        ?            ?
05:02 anoopcs akay1, That's weird..
05:02 akay1 tell me about it :)
05:03 jiffin akay1: can u do one more thing, create a file at the mount point (not inside a directory) and delete that file
05:04 jiffin and check whether file is present in trashcan
05:05 akay1 okay... the file is there and it's the correct size too
05:07 anoopcs akay1, Create a new file(possibly non-empty) under your previous directory hierarchy and try to delete it and see whether it worked?
05:08 anoopcs akay1, If you get the same behavior(size=0 or permission bits as ???) again then I suspect a bug.
05:08 vimal joined #gluster
05:10 vikumar joined #gluster
05:10 akay1 nope, the file isnt there
05:10 akay1 i assume its on one of the bricks as 0 size
05:11 jiffin akay1: can u mention the file size, please?
05:12 maveric_amitc_ joined #gluster
05:12 akay1 ok i found the file on a brick, and it's the correct size
05:13 akay1 its only 67 bytes... i just created a new text file
05:13 jiffin akay1: k
05:14 anoopcs akay1, If so, do the same path based look-up I mentioned before on mount and check
05:14 akay1 same thing - stale file handle
05:14 jiffin akay1: can u give output of the volume status command
05:15 jiffin akay1: all the bricks are online , right?
05:15 spandit joined #gluster
05:15 akay1 yep everything is online except for one NFS server
05:16 jiffin akay1: don't worry about nfs-server, unless u are using it
05:17 akay1 nup not using it
05:17 vmallika joined #gluster
05:18 akay1 http://pastebin.com/PVms5ej8
05:18 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
05:19 ndarshan joined #gluster
05:20 akay1 http://fpaste.org/237887/
05:21 hagarth joined #gluster
05:21 anoopcs akay1, We will try to reproduce the issue. Can you paste the volume info and the brick logs on which the file was present after deletion?
05:23 raghu joined #gluster
05:23 ashiq joined #gluster
05:24 Manikandan joined #gluster
05:30 rafi joined #gluster
05:31 Bhaskarakiran joined #gluster
05:34 harish_ joined #gluster
05:34 ashiq joined #gluster
05:36 jiffin akay1: are u around?
05:36 smohan joined #gluster
05:41 akay1 yep, just putting that together now
05:41 akay1 just noticed a lot of errors setting xattrs
05:43 meghanam joined #gluster
05:47 akay1 http://fpaste.org/237893/56432631/
05:48 akay1 jiffin, anoopcs - please see above link
05:49 DV joined #gluster
05:51 deepakcs joined #gluster
05:52 pppp joined #gluster
05:57 soumya joined #gluster
05:59 anoopcs akay1, Those error lines don't seem to be related to trash. But 'Operation not supported' is an error from the underlying file system. I hope that it supports xattrs. Anyway I will try reproducing the issue with your configured options for the volume and get back to you soon.
06:02 Manikandan joined #gluster
06:04 atalur joined #gluster
06:08 nsoffer joined #gluster
06:12 ramkrsna joined #gluster
06:12 nbalacha joined #gluster
06:19 akay1 all bricks running XFS
06:20 akay1 anoopcs, im also seeing a lot of this: [2015-06-30 05:16:26.788492] E [posix.c:204:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /data/gv0/brick3/brick/.trashcan/vault01
06:20 akay1 [2015-06-30 05:16:26.788556] E [MSGID: 115050] [server-rpc-fops.c:151:server_lookup_cbk] 0-gv0-server: 225403: LOOKUP /.trashcan/vault01 (00000000-0000-0000-0000-000000000005/vault01) ==> (No data available) [No data available]
06:22 akay1 thanks anoopcs, ill try to check back in here as much as i can
06:23 jtux joined #gluster
06:23 anoopcs akay1, Ok.
06:23 rgustafs joined #gluster
06:23 karnan joined #gluster
06:35 spalai joined #gluster
06:40 pdrakeweb joined #gluster
06:44 Bhaskarakiran joined #gluster
06:46 aravindavk joined #gluster
06:48 soumya joined #gluster
06:49 maveric_amitc_ joined #gluster
07:00 marbu joined #gluster
07:00 al joined #gluster
07:00 lkoranda_ joined #gluster
07:01 [Enrico] joined #gluster
07:01 raghu joined #gluster
07:04 Trefex joined #gluster
07:04 ramkrsna joined #gluster
07:04 ramkrsna joined #gluster
07:05 mbukatov joined #gluster
07:08 [Enrico] joined #gluster
07:10 lkoranda joined #gluster
07:13 kotreshhr joined #gluster
07:16 ws2k3 joined #gluster
07:17 spalai left #gluster
07:25 hildgrim joined #gluster
07:26 nishanth joined #gluster
07:28 hildgrim If I have instanceA <—> instanceB in a cluster can I start a georeplication to instanceC, which not part of the cluster?
07:28 hildgrim I've been trying to do something like volume geo-replication instanceA:/export/ instanceC:/export/georeplication start
07:29 hildgrim but I can't get it to work
07:30 hildgrim This is from instanceC that I'm trying to start the geo-replication
07:37 hildgrim I guess my question is; can I start a geo-replication from the instance where I want to geo-replicate to?
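To hildgrim's question: geo-replication is configured and started from the master cluster (the instanceA side), not from the target, and the target must itself serve a gluster volume rather than a bare directory. A hedged sketch of the usual workflow, with `mastervol`/`slavevol` as placeholder volume names:

```shell
# Run on a node of the master cluster (the instanceA/instanceB side).

# One-time: generate and distribute pem keys for passwordless ssh to the slave:
gluster system:: execute gsec_create

# Create the session against a volume served on instanceC, then start it:
gluster volume geo-replication mastervol instanceC::slavevol create push-pem
gluster volume geo-replication mastervol instanceC::slavevol start
gluster volume geo-replication mastervol instanceC::slavevol status
```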
07:44 MrAbaddon joined #gluster
07:50 fsimonce joined #gluster
07:58 Slashman joined #gluster
07:58 soumya joined #gluster
07:58 jcastill1 joined #gluster
08:02 lkoranda_ joined #gluster
08:02 marbu joined #gluster
08:04 jcastillo joined #gluster
08:06 mbukatov joined #gluster
08:12 lkoranda joined #gluster
08:14 [Enrico] joined #gluster
08:18 curratore joined #gluster
08:23 harish_ joined #gluster
08:27 cyberbootje joined #gluster
08:28 ws2k3 joined #gluster
08:30 NTQ joined #gluster
08:34 kdhananjay joined #gluster
08:35 rjoseph joined #gluster
08:38 sysconfig joined #gluster
08:46 ctria joined #gluster
08:51 hildgrim Actually, I might just want to create a snapshot.
08:57 spalai joined #gluster
09:05 kaushal_ joined #gluster
09:06 glusterbot News from resolvedglusterbugs: [Bug 1010153] glusterd: If a server is already part of another cluster and User tries to add it using command 'gluster peer probe <hostname/ip>' ; It is failing but not giving reason for failure <https://bugzilla.redhat.com/show_bug.cgi?id=1010153>
09:11 vovcia hi o/
09:11 vovcia what is INODELK ?
09:12 kokopelli joined #gluster
09:20 atinm joined #gluster
09:43 Pupeno joined #gluster
09:45 mator ndevos, how do i reopen bug in bugzilla.redhat.com ? https://bugzilla.redhat.com/show_bug.cgi?id=1222065
09:45 glusterbot Bug 1222065: high, high, ---, kparthas, CLOSED CURRENTRELEASE, GlusterD fills the logs when the NFS-server is disabled
09:46 ndevos mator: you can change the status of the bug to NEW at the bottom of the page?
09:47 mator nope =(
09:47 atinm mator, you should login to the bugzilla?
09:47 ndevos mator: are you still hitting that bug with glusterfs-3.7.2? the exact same problem?
09:48 mator atinm, i always logged in
09:48 mator ndevos, yeah
09:48 mator btw we need to join all nfs logs bugs to one
09:48 mator there's at least this list :
09:49 mator starting from 2012 , https://bugzilla.redhat.com/show_bug.cgi?id=847821
09:49 glusterbot Bug 847821: low, medium, ---, bugs, NEW , After disabling NFS the message "0-transport: disconnecting now" keeps appearing in the logs
09:49 mator https://bugzilla.redhat.com/show_bug.cgi?id=1199936
09:49 glusterbot Bug 1199936: unspecified, unspecified, 3.6.3, kparthas, MODIFIED , readv on /var/run/6b8f1f2526c6af8a87f1bb611ae5a86f.socket failed when NFS is disabled
09:49 mator https://bugzilla.redhat.com/show_bug.cgi?id=1222065
09:49 glusterbot Bug 1222065: high, high, ---, kparthas, CLOSED CURRENTRELEASE, GlusterD fills the logs when the NFS-server is disabled
09:52 anrao joined #gluster
09:53 ndevos mator: we should have a bug for each different glusterfs version, and a bug should only address one particular issue - so for each set of logs, one bug would be best
09:55 ndevos mator: 1199936 is a 3.6.x bug, and has been fixed in 3.7.0 through 1199944
09:55 mator i do agree, sorry for my last message in 847821
09:56 mator can we set a depency in 847821 on  1199936 and 1199944 ?
09:56 mator and 1222065
09:58 ndevos well, 847821 has been filed against the 'mainline' version, I'm not sure on which bug it should depend?
09:59 ndevos mostly the dependencies are like "bug for a specific version' depends on 'same bug in mainline'
10:00 ndevos that makes it possible for the  maintainers to see if the patch has been merged in mainline, and then a backport can be done
10:09 Pupeno_ joined #gluster
10:10 karnan_ joined #gluster
10:34 ira joined #gluster
10:43 jcastill1 joined #gluster
10:46 atinm joined #gluster
10:49 jcastillo joined #gluster
10:54 kovshenin joined #gluster
10:59 rgustafs joined #gluster
11:04 smohan joined #gluster
11:07 glusterbot News from newglusterbugs: [Bug 1233025] GlusterFS 3.7.3 tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1233025>
11:07 glusterbot News from newglusterbugs: [Bug 1214629] RFE: Remove disperse-data option in the EC volume creation command <https://bugzilla.redhat.com/show_bug.cgi?id=1214629>
11:07 glusterbot News from resolvedglusterbugs: [Bug 1227206] GlusterFS 3.7.2 tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1227206>
11:07 kkeithley1 joined #gluster
11:07 hildgrim joined #gluster
11:08 raghug joined #gluster
11:12 mator back from lunch
11:15 mator ndevos, https://bugzilla.redhat.com/show_bug.cgi?id=1222065#c5
11:15 glusterbot Bug 1222065: high, high, ---, kparthas, ASSIGNED , GlusterD fills the logs when the NFS-server is disabled
11:16 mator version info in my comment #5 , going to attach logs ...
11:19 LebedevRI joined #gluster
11:23 rjoseph|afk joined #gluster
11:27 Misuzu joined #gluster
11:30 rafi1 joined #gluster
11:33 rafi joined #gluster
11:33 [Enrico] joined #gluster
11:37 gem_ joined #gluster
11:40 Trefex joined #gluster
11:42 kokopelli joined #gluster
11:44 soumya_ joined #gluster
11:56 unclemarc joined #gluster
11:58 gem_ joined #gluster
11:59 rafi REMINDER: Gluster Community Bug Triage meeting starting in another 1 minutes in #gluster-meeting
11:59 rafi joined #gluster
12:05 firemanxbr joined #gluster
12:09 tanuck joined #gluster
12:09 kotreshhr left #gluster
12:12 rwheeler joined #gluster
12:13 jtux joined #gluster
12:26 itisravi joined #gluster
12:30 txomon|fon joined #gluster
12:32 txomon|fon hi, can anyone help me booting gluster? because it will just not boot
12:32 txomon|fon https://gist.github.com/txomon/58841f2f28654953b512
12:32 txomon|fon I have done a setup for two machines
12:34 txomon|fon and I have no idea on what is the problem
12:34 mator txomon|fon, check with brick logs as well ?
12:37 glusterbot News from newglusterbugs: [Bug 1234891] gf_store_save_value() fflush() error-checking bug, leading to corruption of glusterd.info when filesystem is full <https://bugzilla.redhat.com/show_bug.cgi?id=1234891>
12:37 glusterbot News from resolvedglusterbugs: [Bug 1228785] Cannot add brick without manually setting op-version <https://bugzilla.redhat.com/show_bug.cgi?id=1228785>
12:40 txomon|fon mator, doesn't even start
12:40 txomon|fon that's the only written log :(
12:41 anoopcs akay1, We have confirmed the issue you reported earlier is a bug with a distribute-replicate volume. Can you report the same @ Bugzilla?
12:43 Ulrar How do I fix a split brain ?
12:43 Ulrar Can I tell one of the node "congrats, use your data" ?
12:45 merlink joined #gluster
12:47 mribeirodantas joined #gluster
12:48 wkf joined #gluster
12:54 DV joined #gluster
12:56 B21956 joined #gluster
12:57 DV_ joined #gluster
12:57 nsoffer joined #gluster
13:04 jrm16020 joined #gluster
13:06 smohan joined #gluster
13:06 mator Ulrar, depends on a volume type , but fix is "heal" probably
13:07 aaronott joined #gluster
13:07 gildub joined #gluster
13:08 anrao joined #gluster
13:14 Ulrar mator: Yeah no, had to delete the files by hand
13:15 Ulrar really not cool in an emergency situation I have to say
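For the record, glusterfs 3.7 did grow a CLI for exactly the "congrats, use your data" case, so files don't have to be deleted by hand. A hedged sketch, with `myvol`, the brick path and the file path as placeholders:

```shell
# See which files are currently in split-brain:
gluster volume heal myvol info split-brain

# Pick one replica as the authoritative copy for a given file:
gluster volume heal myvol split-brain source-brick server1:/bricks/b1 /path/to/file

# Or resolve by keeping the larger copy:
gluster volume heal myvol split-brain bigger-file /path/to/file
```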
13:17 aaronott joined #gluster
13:19 julim joined #gluster
13:22 pppp joined #gluster
13:23 georgeh-LT2 joined #gluster
13:26 RayTrace_ joined #gluster
13:26 soumya_ joined #gluster
13:29 Trefex joined #gluster
13:32 plarsen joined #gluster
13:37 hamiller joined #gluster
13:37 glusterbot News from newglusterbugs: [Bug 1233333] glusterfs-resource-agents - volume - doesn't stop all processes <https://bugzilla.redhat.com/show_bug.cgi?id=1233333>
13:37 glusterbot News from resolvedglusterbugs: [Bug 1233484] Possible double execution of the state machine for fops that start other subfops <https://bugzilla.redhat.com/show_bug.cgi?id=1233484>
13:37 glusterbot News from resolvedglusterbugs: [Bug 1233282] Possible double execution of the state machine for fops that start other subfops <https://bugzilla.redhat.com/show_bug.cgi?id=1233282>
13:39 mpaul joined #gluster
13:40 mator ndevos, can you please comment ? why 3 people is not enough ? :)  http://www.gluster.org/pipermail/gluster-users/2015-June/022628.html
13:42 theron joined #gluster
13:43 theron_ joined #gluster
13:47 guardianx joined #gluster
13:47 guardianx left #gluster
13:57 cyberswat joined #gluster
13:57 raghug joined #gluster
14:07 glusterbot News from newglusterbugs: [Bug 1237174] Incorrect state created in '/var/lib/nfs/statd' <https://bugzilla.redhat.com/show_bug.cgi?id=1237174>
14:07 glusterbot News from newglusterbugs: [Bug 1092183] GlusterFS and Systemd - start/stop is broken <https://bugzilla.redhat.com/show_bug.cgi?id=1092183>
14:07 shyam joined #gluster
14:17 aravindavk joined #gluster
14:21 curratore joined #gluster
14:23 jcastill1 joined #gluster
14:24 maveric_amitc_ joined #gluster
14:28 jcastillo joined #gluster
14:31 maveric_amitc_ joined #gluster
14:32 mckaymatt joined #gluster
14:34 spalai joined #gluster
14:37 mckaymatt joined #gluster
14:37 glusterbot News from newglusterbugs: [Bug 1182145] mount.glusterfs doesn't support mount --verbose <https://bugzilla.redhat.com/show_bug.cgi?id=1182145>
14:44 bfoster joined #gluster
14:44 ekuric joined #gluster
14:46 curratore Sjors how was the rsync? all fixed? I had to go yesterday and I didn’t say anything until now
14:47 cholcombe joined #gluster
14:48 free_amitc_ joined #gluster
14:51 JustinCl1ft left #gluster
15:00 jmarley joined #gluster
15:04 aaronott joined #gluster
15:05 aaronott joined #gluster
15:05 aaronott joined #gluster
15:09 ws2k3 joined #gluster
15:13 soumya_ joined #gluster
15:13 bennyturns joined #gluster
15:16 squizzi_ joined #gluster
15:17 ws2k3 joined #gluster
15:24 kanagaraj joined #gluster
15:31 jblack joined #gluster
15:35 spalai left #gluster
15:48 vovcia hi :) is there a guide to run native NFS server with gluster? i have gluster 3.7 and centos 7
15:48 vovcia i thought nfs is enabled by default but mount doesnt work :\
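For reference, the built-in gNFS server speaks NFSv3 only and needs rpcbind running on the server, which trips people up on CentOS 7. A hedged checklist, with `myvol` and `server` as placeholders:

```shell
# rpcbind must be up for the gNFS server to register:
systemctl start rpcbind

# NFS is controlled per volume; make sure it is not disabled:
gluster volume set myvol nfs.disable off

# The NFS Server line should show Online=Y with a port:
gluster volume status myvol nfs

# gNFS is NFSv3 only, so pin the version (and TCP for the mount protocol):
mount -t nfs -o vers=3,mountproto=tcp server:/myvol /mnt/nfs
```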
15:58 mckaymatt joined #gluster
16:01 anrao joined #gluster
16:02 bturner_ joined #gluster
16:17 ethand320 joined #gluster
16:17 jcastill1 joined #gluster
16:23 jcastillo joined #gluster
16:26 gem_ joined #gluster
16:29 theron joined #gluster
16:30 vmallika joined #gluster
16:31 hchiramm_home joined #gluster
16:33 rwheeler joined #gluster
16:35 cholcombe gluster: how can I add a language binding to this page? http://www.gluster.org/community/documentation/index.php/Language_Bindings
16:35 coredump joined #gluster
16:36 cholcombe semiosis: you probably know this ^ :)
16:39 kkeithley_ No, only because the database is locked and edits won't be saved. It's locked because, IIRC, we're moving to something else.  Once the migration is done you'll be able to add things
16:40 kkeithley_ The administrator who locked it offered this explanation: This wiki is deprecated, please see https://gluster.readthedocs.org/en/latest/
16:52 cholcombe kkeithley_: thanks :)
16:53 cholcombe so read the docs is the way forward now?
16:54 cholcombe wow these docs are extensive! nice work!
16:59 Rapture joined #gluster
17:17 chirino joined #gluster
17:23 neofob joined #gluster
17:27 hagarth joined #gluster
17:46 kovshenin joined #gluster
17:49 kovshenin joined #gluster
17:52 jobewan joined #gluster
17:54 kkk joined #gluster
18:11 firemanxbr joined #gluster
18:12 PeterA joined #gluster
18:12 PeterA anything we need to do on gluster with leap second?
18:14 ndevos I dont think so, but have not thought more about it than a few seconds now :)
18:15 PeterA lol i am on ubuntu 12.04 which is kernel 2.6....
18:15 hagarth left #gluster
18:25 calavera joined #gluster
18:34 tanuck joined #gluster
18:35 Slashman joined #gluster
18:59 theron_ joined #gluster
19:10 tanuck joined #gluster
19:17 mckaymatt joined #gluster
19:20 shyam joined #gluster
19:24 Slashman joined #gluster
19:25 calavera joined #gluster
19:32 MrAbaddon joined #gluster
19:34 calavera joined #gluster
19:38 cyberswat joined #gluster
19:41 calavera_ joined #gluster
19:59 theron joined #gluster
20:01 mckaymatt joined #gluster
20:06 tanuck joined #gluster
20:25 arcolife joined #gluster
20:27 cholcombe did the quota enable behavior change in version 3.7?  It seems to be creating a fuse mount now for some reason
20:27 tanuck_ joined #gluster
20:28 RedW joined #gluster
20:34 cholcombe looks like it did the gf_cli_create_aux_mount in 3.6 also.  I wonder why i never noticed that before
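Context for cholcombe's observation: glusterd mounts an auxiliary FUSE client of the volume so it can set and read quota xattrs through a normal mount, and that mount shows up in the mount table alongside user mounts. A hedged sketch, `myvol` and the directory path being placeholders:

```shell
gluster volume quota myvol enable

# Setting or listing limits goes through the auxiliary mount glusterd created:
gluster volume quota myvol limit-usage /some/dir 10GB
gluster volume quota myvol list

# The aux mount is visible in the mount table (its path varies by version):
grep glusterfs /proc/mounts
```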
20:45 lexi2 joined #gluster
20:55 plarsen joined #gluster
21:20 PatNarciso ... you know its bad when coredump quits.
21:20 cholcombe haha yeah it is
21:20 cholcombe anyone have a quota.conf file i can look at?
21:20 cholcombe i'm curious what it's writing to it
21:21 RayTrace_ joined #gluster
21:25 vovcia hi o/ do You know what is INODELK ?
21:33 jrm16020 joined #gluster
21:39 cholcombe looks like inode delete?
21:39 cholcombe or no delete?
21:40 cholcombe vovcia: https://github.com/gluster/glusterfs/search?utf8=%E2%9C%93&amp;q=INODELK
21:42 vovcia cholcombe: very helpful indeed :P
21:43 cholcombe haha
21:43 mckaymatt Hello Gluster.  I have a question about running Gluster in a  Docker container - Has anyone tried running the Gluster client in a Docker container with the intent of sharing a Gluster volume between the container and the  host?
21:44 vovcia i have some crazy INODELK hangs - workload is unpacking rkt image (container), crazy get/setxattr stuff going on :)
21:44 vovcia like tens of seconds
21:44 vovcia mckaymatt: yes it is
21:45 mckaymatt I have set up two containers, one running a Gluster server and one running a Gluster client. Gluster seems to be working fine as long as I am trying to push/pull files inside the client container
21:46 mckaymatt the client's volume is shared with the host and I am trying to use volume on the host to no avail
21:47 cholcombe hey matt how's it going?
21:47 mckaymatt Well thanks
21:48 cholcombe i have gluster running in lxc containers without any issue
21:48 mckaymatt Hmmm maybe it's a permissions issue.
21:48 cholcombe are you running priv'd containers?
21:48 mckaymatt ywa
21:48 mckaymatt yes*
21:49 vovcia mckaymatt: clarify: You use mounted gluster volumes on host?
21:49 cholcombe do you have some firewall settings on your docker?  I don't remember exactly how that works
21:49 vovcia mckaymatt: You dont mess with bricks? ;)
21:51 mckaymatt disclaimer, I am new to Gluster. I have the G server sharing a directory and the G client is able to mount that directory
21:52 mckaymatt I don't really know what a brick is. I don't suspect it's a firewall issue because I can share between the server and client, but only if I push/pull the file from within the docker containers using
21:53 bennyturns joined #gluster
21:53 badone joined #gluster
21:53 mckaymatt if I ls the client dir from the host I see nothing. If I ls the client dir from within the container I see what I would expect.
21:54 vovcia mckaymatt: ok on Your host did U mount gluster?
21:54 vovcia mckaymatt: or u try to edit file directly in filesystem?
21:56 mckaymatt On the host I used Docker's volume share feature to share the dir. I didn't actually mount it on the host.
21:56 mckaymatt I assume you are referring to the /binmount command.
21:56 mckaymatt /bin/mount*
21:57 vovcia mckaymatt: if You want to use gluster volume, You need to mount it with gluster client
21:58 vovcia mckaymatt: You dont mess with gluster directory directly
21:58 vovcia mckaymatt: i hope thats Your issue :))
21:59 vovcia cholcombe: maybe You have some more ideas about INODELK?
21:59 cholcombe not really.  i don't know much about it
21:59 vovcia cholcombe: i think its something with multi threading extended attributes load...
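On the INODELK question: it is the internal fop that cluster translators such as AFR use to lock an inode's data/metadata ranges before modifying them, which is why heavy xattr traffic can pile up behind it. A hedged way to see held and blocked locks (`myvol` is a placeholder; the dump directory is commonly /var/run/gluster but is configurable):

```shell
# Ask the brick processes to dump their in-memory state, locks included:
gluster volume statedump myvol

# Inspect the dumps for lock entries on the affected inode:
grep -A3 INODELK /var/run/gluster/*.dump.*
```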
21:59 mckaymatt okay I think it might but just clarify something for me. Do you mean that on the host (running the containers) I would do something like "sudo mount -t glusterfs <dockerized gluster server ip>/<volume> <wherever I want to mount>
22:00 vovcia mckaymatt: yes
22:01 mckaymatt and it sounds like I shouldn't be making changes to the Gluster server's gluster directory
22:02 mckaymatt okay that helps a lot
22:03 mckaymatt ya that might be part of the problem. So if a gluster server needs access to files in gluster, it would also run the gluster client. That's what I am hearing
22:04 vovcia mckaymatt: yep :)) it's because in glusterfs all work is done by client
22:04 calavera joined #gluster
22:04 vovcia mckaymatt: not 100% true but easy to remember :)
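Summing up vovcia's advice for the Docker setup: the host should mount the volume with the gluster client instead of bind-sharing the client container's directory. A hedged sketch, with the server IP, volume name and mount point as placeholders:

```shell
# On the container host:
mkdir -p /mnt/gluster
mount -t glusterfs <dockerized-gluster-server-ip>:/<volname> /mnt/gluster

# fstab equivalent, if it should survive reboots:
# <dockerized-gluster-server-ip>:/<volname>  /mnt/gluster  glusterfs  defaults,_netdev  0 0
```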
22:07 mckaymatt Tanks for your help :)
22:07 mckaymatt or thanks! Whichever works.
22:08 mckaymatt joined #gluster
22:23 delhage joined #gluster
22:23 eljrax joined #gluster
22:27 calavera joined #gluster
22:43 julim joined #gluster
22:45 Pupeno joined #gluster
22:48 lexi2 joined #gluster
22:49 hagarth joined #gluster
23:00 nangthang joined #gluster
23:01 mribeirodantas joined #gluster
23:03 gildub joined #gluster
23:18 kovshenin joined #gluster
23:29 wkf joined #gluster
23:29 Gill joined #gluster
23:31 badone_ joined #gluster
23:33 sysconfig joined #gluster
23:44 wkf joined #gluster
23:46 Rapture joined #gluster
23:48 coredump joined #gluster
23:50 badone__ joined #gluster
23:53 maveric_amitc_ joined #gluster
23:59 B21956 joined #gluster
