
IRC log for #gluster, 2015-07-02


All times shown according to UTC.

Time Nick Message
00:11 bjornar joined #gluster
00:15 glusterbot News from newglusterbugs: [Bug 1238476] Throttle background heals in disperse volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1238476>
00:30 vmallika joined #gluster
00:45 an joined #gluster
00:51 mribeirodantas joined #gluster
01:13 RedW joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:51 kdhananjay joined #gluster
02:02 nangthang joined #gluster
02:05 social joined #gluster
02:06 meghanam joined #gluster
02:07 PatNarciso joined #gluster
02:14 maveric_amitc_ joined #gluster
02:18 lyang0 joined #gluster
02:19 jvandewege joined #gluster
02:24 an joined #gluster
02:35 wkf joined #gluster
02:37 aravindavk joined #gluster
02:37 akay1 joined #gluster
02:38 hagarth joined #gluster
02:40 aaronott joined #gluster
02:41 rarylson joined #gluster
02:45 rarylson joined #gluster
03:05 bharata-rao joined #gluster
03:07 harish_ joined #gluster
03:13 nangthang joined #gluster
03:26 vmallika joined #gluster
03:32 atinm joined #gluster
03:42 overclk joined #gluster
03:43 bharata-rao joined #gluster
03:43 TheSeven joined #gluster
03:44 smohan joined #gluster
03:59 soumya joined #gluster
04:01 anmol joined #gluster
04:02 spandit joined #gluster
04:07 raghug joined #gluster
04:17 RameshN joined #gluster
04:20 shubhendu joined #gluster
04:20 meghanam joined #gluster
04:25 gem joined #gluster
04:34 harish joined #gluster
04:37 kanagaraj joined #gluster
04:37 aravindavk joined #gluster
04:39 sakshi joined #gluster
04:45 jiffin joined #gluster
04:55 PatNarciso joined #gluster
04:57 yazhini joined #gluster
05:00 victori joined #gluster
05:03 yosafbridge joined #gluster
05:05 vimal joined #gluster
05:05 anil joined #gluster
05:06 free_amitc_ joined #gluster
05:08 ndarshan joined #gluster
05:11 deepakcs joined #gluster
05:13 rafi1 joined #gluster
05:19 ppai joined #gluster
05:24 dusmant joined #gluster
05:35 ashiq joined #gluster
05:35 Manikandan joined #gluster
05:36 SOLDIERz joined #gluster
05:37 pppp joined #gluster
05:47 atalur joined #gluster
05:52 overclk joined #gluster
05:53 deepakcs joined #gluster
05:53 an_ joined #gluster
05:55 vmallika joined #gluster
06:02 kdhananjay joined #gluster
06:04 an joined #gluster
06:08 ramteid joined #gluster
06:19 kotreshhr joined #gluster
06:20 jtux joined #gluster
06:27 schandra joined #gluster
06:31 aravindavk joined #gluster
06:37 soumya joined #gluster
06:39 aravindavk joined #gluster
06:41 nangthang joined #gluster
06:44 kdhananjay joined #gluster
06:45 smohan joined #gluster
06:52 nbalacha joined #gluster
06:53 dusmant joined #gluster
07:05 an joined #gluster
07:05 anmol joined #gluster
07:05 ramkrsna joined #gluster
07:05 ramkrsna joined #gluster
07:05 rgustafs joined #gluster
07:06 [Enrico] joined #gluster
07:06 andras ndevos: do you remember yesterday I could not mount. Now fixed
07:09 andras ndevos: Actually I found a SPOF in a distributed system. One of the machines was out of memory and could not make any new network connections, while at the same time all the other nodes believed the failing node was online. What I did was restart glusterd one-by-one until I found the failing node. Suddenly everything worked as before
07:11 corretico joined #gluster
07:12 free_amitc_ joined #gluster
07:13 SpComb livelock
07:19 abyss_ joined #gluster
07:24 abyss joined #gluster
07:28 dusmant joined #gluster
07:29 meghanam joined #gluster
07:29 MrAbaddon joined #gluster
07:31 Saravana_ joined #gluster
07:33 an joined #gluster
07:45 an joined #gluster
07:49 fsimonce joined #gluster
07:52 jcastill1 joined #gluster
07:57 jcastillo joined #gluster
08:01 gem joined #gluster
08:01 ndevos andras: /win 3
08:02 ndevos andras: wow, that sounds like something the glusterd folks should know about
08:03 ndevos andras: well, maybe at least, were there any error messages in the etc-glusterd-..log? those errors maybe should be marked as critical and have the process exit
08:03 andras ndevos: Yes, it is considered a vulnerability. The whole gluster can become inaccessible if one node is out of memory
08:03 andras I saw no critical error in any of the log files. The whole gluster believed that everything was OK
08:04 ndevos andras: no, gluster often tries to handle out of memory issues decently, but certain occasions should be fatal
08:05 ndevos andras: so, I think the actual memory issue might be listed as a warning, and was gracefully recovered from (or tried to)
08:05 andras in my case it was not fatal.
08:05 andras i will check again if I had any warnings about memory
08:06 ndevos andras: do you know which process you restarted? only the glusterd one, or also glusterfsd and glusterfs?
08:08 Slashman joined #gluster
08:09 the-me joined #gluster
08:09 andras service glusterd stop, service glusterfsd stop, + manually killed 2 more glusterfs processes, then service glusterd start: a full restart
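A minimal sketch of the node-by-node restart andras describes, assuming sysvinit-style service scripts; the leftover glusterfs processes to kill will differ per node:

    service glusterd stop          # management daemon
    service glusterfsd stop        # brick processes
    pkill glusterfs                # any remaining glusterfs client/NFS/self-heal processes (check ps first)
    service glusterd start         # brings bricks and the helper daemons back up
    gluster peer status            # wait for "State: Peer in Cluster (Connected)" on all peers before the next node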
08:09 andras ndevos: I got these kind of warning in logs : W [socket.c:522:__socket_rwv] 0-management: readv on 10.10.10.242:24007 failed (No data available)
08:10 andras ndevos: not too many though; when I got locked, I actually had only the remove-brick rebalance running, logging to rebalance.log
08:11 ndevos andras: I do not know if those would be sufficient, maybe it needs some more lines before/after
08:12 ndevos andras: I guess the best approach would be to file a bug and explain the behaviour, atinm or one of the other glusterd devs should have a look at it
08:12 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
08:13 andras ndevos: +3 lines http://fpaste.org/238719/
08:15 ndevos andras: hmm, what about the logs on 10.10.10.245 ?
08:17 jcastill1 joined #gluster
08:19 andras ndevos: .245 was just restarted that time, not relevant
08:20 ndevos andras: ah, try to find something in the logs of the system that went OOM, and find the earliest log entries that could relate to it
08:21 sysconfig joined #gluster
08:22 andras ndevos: will check that again... yesterday I was following the logs all the time; there was nothing critical, very few info lines in etc-glusterfs-glusterd.vol.log
08:23 jcastillo joined #gluster
08:24 ndevos andras: are you confident that it was the glusterd process that caused the issue, or could it have been one of the brick processes too?
08:25 ndevos andras: /var/log/glusterfs/bricks/*.log on the affected system might have something too
08:25 andras ndevos: earlier in the morning I got these: http://fpaste.org/238726/  on the failing node,
08:26 andras ndevos: will check brick logs
08:26 curratore joined #gluster
08:30 andras ndevos:  sorry, the first entries in the brick logs are from this morning. No earlier logs.
08:30 ndevos andras: I dont think that log is very helpful either, but I'm no glusterd guy :)
08:30 ndevos andras: no log-rotated ones? with a -<date> or .gz ?
08:30 andras ndevos: sorry mixed head and tail commands   :-)
08:31 karnan joined #gluster
08:35 andras ndevos:  no critical errors, just Info; all of it relates to the ongoing rebalance and some files having wrong xattrs.
08:35 lyang0 joined #gluster
08:39 ndarshan joined #gluster
08:39 ndevos andras: hmm, something in /var/log/messages? if the whole system was affected, something outside of gluster could have caught it
08:40 ndevos andras: there are rare occasions where the kernel can not allocate memory to do (for example) network IO, and those can hang connections/sockets until the kernel recovers
08:41 ndevos andras: those cases are very difficult to detect from the daemons, it is mostly generic system monitoring that notices this
08:45 andras ndevos: I assume it was exactly what you said: cannot allocate memory and it just hangs forever.  Will read more logs and come back if I find something
08:45 surabhi_ joined #gluster
08:49 ndevos andras: ok, maybe a good time to setup zabbix/nagios/... monitoring in case you dont have something like that yet
08:49 andras i have nagios and munin
08:51 andras ndevos: must have. i am thinking about some more complex daily health checker. maybe with rundeck or similar
08:52 andras ndevos: thinking about building one, whenever I have some time....
08:52 ndevos andras: I know there are some zabbix scripts/templates that people use for monitoring different gluster aspects, but I have not tried it out yet
08:54 ndevos andras: that is called "glubix" and should be on github, zabbix also supports triggers and other things for alerting, its quite complete and not too difficult to setup
08:54 ndevos and it comes with graphs, people like images too :D
08:56 ctria joined #gluster
08:57 gem joined #gluster
09:01 Trefex joined #gluster
09:03 rjoseph joined #gluster
09:07 spalai joined #gluster
09:09 Manikandan joined #gluster
09:13 raghu joined #gluster
09:19 deniszh joined #gluster
09:21 surabhi_ joined #gluster
09:27 dusmant joined #gluster
09:35 andras ndevos: will check zabbix+glubix. Pretty names heh :-)
09:36 corretico joined #gluster
09:41 gem joined #gluster
09:41 ndevos andras: hehe, yeah, Glubix is my favorite :)
09:45 MrAbaddon joined #gluster
09:46 shubhendu joined #gluster
09:47 RameshN joined #gluster
09:48 curratore left #gluster
09:49 anmol joined #gluster
09:49 ndarshan joined #gluster
09:50 an joined #gluster
09:51 al0 joined #gluster
09:53 rgustafs joined #gluster
09:54 dusmant joined #gluster
09:55 atinm ndevos, andras : it took some time for me to go through yur chat
09:55 atinm *your
09:56 atinm andras, in case of a node getting OOM killed, the other nodes in the cluster *should not* see it online for long
09:56 atinm andras, however there would be a time window (till the nodes receive rpc disconnect events) during which they will see it as online, and at that time things might look weird
09:56 ndevos atinm: what if the case is that the OOM just causes a hang? new connections/sockets may not work, old ones could stay (partially) functional
09:58 andras atinm: It was not killing other nodes, it was just unable to make new connections. I was unable to mount on any of the nodes
09:58 atinm ndevos, then we are in a trouble :D
09:58 atinm ndevos, that's why I mentioned if its OOM *killed*
09:59 ndevos atinm: I've seen this happen due to problems with network cards as well, not sure how to detect that in our daemons
09:59 ndevos atinm: yeah, a *kill* is handled just fine I guess, its the hangs that can cause problems
10:00 cuqa__ joined #gluster
10:00 atinm ndevos, if there is a problem in the underlying n/w things can go weird, we don't have control there as of now :(
10:01 ndevos atinm: even if it is not caused by glusterd or the other daemons, it would be awesome if we can notice a hang somehow - but I dont have any idea how we could do that
10:01 ndevos atinm: yes, indeed, and for those things we rely on other monitoring solutions, maybe we should just promote their usage more?
10:02 atinm ndevos, probably implementing heartbeat mechanism might help
10:03 ndevos atinm: maybe, or occasionally disconnect and re-connect the ping-timer connections or something...
10:04 atinm ndevos, but you bring up a point now, we have the ping timer enabled in upstream
10:04 atinm ndevos, and it's 30 secs by default
10:05 kovshenin joined #gluster
10:05 atinm ndevos, and in that case, if one of the machines is hung, the other glusterds should be able to detect it IMO
10:05 atinm andras, which version of gluster are you using?
10:05 ndevos atinm: is that a ping timer between glusterd's?
10:06 ghenry joined #gluster
10:06 atinm ndevos, we do have a ping timer between glusterd's as well IIRC
10:07 andras atinm: 3.5.2
10:07 atinm ndevos, I will be back
10:07 atinm andras, here you go :)
10:07 atinm andras, you wouldn't face this problem from 3.6 onwards, as we have the ping timer implementation there
10:08 andras atinm: glad to hear !  will upgrade when time allows
10:08 ndevos atinm: cool, thanks!
10:09 tanuck joined #gluster
10:10 andras ndevos, atinm: thanks guys for the excellent support!
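A hedged example of the client-side ping timeout touched on above; network.ping-timeout is a standard volume option (42 seconds by default for clients) and 'myvol' is a placeholder:

    gluster volume set myvol network.ping-timeout 30
    gluster volume info myvol       # the change shows up under "Options Reconfigured"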
10:19 an joined #gluster
10:24 ndarshan joined #gluster
10:25 anmol joined #gluster
10:26 dusmant joined #gluster
10:27 shubhendu joined #gluster
10:27 LebedevRI joined #gluster
10:30 jiffin1 joined #gluster
10:36 nsoffer joined #gluster
10:36 meghanam joined #gluster
10:37 aravindavk joined #gluster
10:38 atrius joined #gluster
10:44 Leildin joined #gluster
10:47 shubhendu joined #gluster
10:48 atalur joined #gluster
10:49 nbalacha joined #gluster
10:49 atinm joined #gluster
10:50 RameshN joined #gluster
10:52 ekuric joined #gluster
10:56 atalur joined #gluster
10:59 dusmant joined #gluster
11:03 kotreshhr1 joined #gluster
11:06 kotreshhr joined #gluster
11:09 atinmu joined #gluster
11:15 partner joined #gluster
11:17 glusterbot News from newglusterbugs: [Bug 1238661] When bind-insecure is enabled, bricks may not be able to bind to port assigned by Glusterd <https://bugzilla.redhat.com/show_bug.cgi?id=1238661>
11:25 rafi joined #gluster
11:27 Fidelix joined #gluster
11:28 rafi1 joined #gluster
11:29 rafi joined #gluster
11:29 javi404 joined #gluster
11:32 unclemarc joined #gluster
11:33 lpabon joined #gluster
11:36 meghanam_ joined #gluster
11:36 jiffin joined #gluster
11:43 kkeithley1 joined #gluster
11:43 an joined #gluster
11:44 javi404 joined #gluster
11:46 rafi joined #gluster
11:46 Bhaskarakiran joined #gluster
11:48 spalai left #gluster
11:52 dusmant joined #gluster
11:53 shubhendu joined #gluster
11:58 Manikandan joined #gluster
12:04 vmallika joined #gluster
12:05 perpetualrabbit I tried testing `reconstruction' of a brick in a disperse volume. I have 20 bricks, redundancy 4. I stopped and emptied one brick, but it seems to take far too long before it is repopulated. Also it seems to have stopped. I really need more information on how to handle different failures on a disperse volume (i.e. erasure coding). Also how do I check the health of a disperse volume: what is the state of reconstruction, which
12:05 perpetualrabbit nodes are up and down, how do I add or remove nodes, how do I replace an existing node 'in-place', and more like this.
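A sketch of the usual health checks for this situation, assuming a 3.7 disperse volume named myvol; the same heal commands used for replica volumes apply to disperse volumes:

    gluster volume status myvol       # which bricks/nodes are online, with PIDs and ports
    gluster volume heal myvol info    # entries still pending reconstruction per brick
    gluster volume heal myvol full    # trigger a full self-heal, e.g. after wiping a brick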
12:11 spalai joined #gluster
12:12 firemanxbr joined #gluster
12:12 jtux joined #gluster
12:15 kotreshhr joined #gluster
12:19 rjoseph joined #gluster
12:22 julim joined #gluster
12:28 alexandregomes joined #gluster
12:29 alexandregomes Hi, I’m trying to connect to a “degraded” 2-replica with a single server up. The client won’t mount unless both servers are up and running… in the logs it appears it’s trying to connect to the second server but fails.
12:31 [Enrico] joined #gluster
12:33 jiffin joined #gluster
12:34 alexandregomes volume status shows that the volume is not online, how can I force it?
12:35 kotreshhr left #gluster
12:37 B21956 joined #gluster
12:41 jiffin alexandregomes: try: gluster volume start <volname> force
12:42 alexandregomes jiffin: thanks, just found that out. however, the new mnt isn’t getting the existing files
12:43 jiffin alexandregomes: give output of volume status command
12:44 alexandregomes http://pastebin.com/LJPPsSfS
12:44 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
12:45 bit4man joined #gluster
12:45 jiffin alexandregomes: it seems ur bricks are online
12:45 alexandregomes yes
12:45 alexandregomes it does mount
12:45 alexandregomes but doesn’t heal
12:46 alexandregomes the mount point is new btw, never had gluster mounted on it…
12:46 shyam joined #gluster
12:46 alexandregomes but my single file on the brick isnt getting there
12:46 anti[Enrico] joined #gluster
12:46 jiffin alexandregomes: did u create file directly on the backend?
12:47 alexandregomes no no
12:48 itisravi joined #gluster
12:48 alexandregomes it’s odd, I’ve another client and that one is working
12:48 alexandregomes this new client is running on the same docker container as the server
12:48 alexandregomes but the other client that’s working was up from the very start of me killing the servers
12:49 jrm16020 joined #gluster
12:49 alexandregomes ok… http://fpaste.org/238862/41376143/
12:49 alexandregomes seems like a shell or inode-related problem
12:50 jiffin alexandregomes : this is from a working client right?
12:51 pdrakeweb joined #gluster
12:51 alexandregomes this was the new client on the same container as the server
12:51 wkf joined #gluster
12:51 alexandregomes seems like me doing cd /mnt/client and THEN mounting didnt really get the correct dir
12:52 alexandregomes I’ve retested everything from the start and with the volume start force it works
12:52 alexandregomes thanks for the help :)
12:52 Bhaskarakiran joined #gluster
12:52 jiffin alexandregomes: np
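The recovery steps from the exchange above, as a hedged sketch; 'myvol', 'server1' and the mount point are placeholders:

    gluster volume start myvol force                  # bring the volume online with only one replica reachable
    gluster volume status myvol                       # the surviving brick should show Online: Y
    mount -t glusterfs server1:/myvol /mnt/client     # mount against the server that is up
    gluster volume heal myvol info                    # once the peer returns, watch the pending heals drain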
12:52 alexandregomes but I’ve another quick question
12:52 alexandregomes is it possible to force a volume to start and clone a “master”?
12:53 jiffin alexandregomes: i am afraid , i didn't get u
12:54 alexandregomes my idea is that to avoid split-brain situations I just want to start a volume and tell it to replicate from a specific master before going up
12:55 alexandregomes like, if host A and B lose connectivity I want to make sure B mirrors A exactly before going online
12:55 jiffin alexandregomes: i think u need two volumes, in which one is master , another backup
12:56 alexandregomes if it helps, this is for postgres failover where at any point there should be 1 master and multiple slaves
12:56 jiffin there should be a sync between those two
12:57 jiffin alexandregomes: i hope u are referring to the geo-replication feature in glusterfs
12:57 alexandregomes no
12:58 alexandregomes I just want to have 2 servers running postgres. At any time I’ll have 1 postgres as master running and the slave stopped. If there’s a problem the slave comes up and the old master should clone and sync with the data on the new master
12:59 jiffin alexandregomes:did u mean master and slave are two different volumes or
12:59 alexandregomes in glusterfs terms I believe I just want to prevent split-brain situations by telling a volume which brick has priority
12:59 alexandregomes same volume
12:59 jiffin two different bricks in a volume?
12:59 alexandregomes yes, 2 server/bricks running a single volume
13:00 alexandregomes 2replica obviously
13:00 jiffin alexandregomes: i don't think there is support for that
13:01 alexandregomes ok…
13:01 alexandregomes thanks man!
13:07 spalai left #gluster
13:10 Trefex joined #gluster
13:12 jcastill1 joined #gluster
13:15 julim joined #gluster
13:17 jcastillo joined #gluster
13:18 jrm16020 joined #gluster
13:18 jrm16020 joined #gluster
13:22 ank joined #gluster
13:25 georgeh-LT2 joined #gluster
13:26 dgandhi joined #gluster
13:29 pppp joined #gluster
13:33 DV joined #gluster
13:38 kovshenin joined #gluster
13:40 jobewan joined #gluster
13:44 jmarley joined #gluster
13:49 ira joined #gluster
13:55 johne_ joined #gluster
13:56 coredump joined #gluster
13:58 corretico joined #gluster
14:00 vmallika joined #gluster
14:02 bennyturns joined #gluster
14:02 sankarshan joined #gluster
14:03 bene2 joined #gluster
14:04 shubhendu joined #gluster
14:05 marbu joined #gluster
14:09 theron joined #gluster
14:18 johne_ joined #gluster
14:21 wushudoin joined #gluster
14:22 johne_ Hi, I can't get geo-replication in 3.6 to work. I've set up passwordless ssh from the master to the slave.
14:23 johne_ When I try gluster volume geo-replication myvol x.x.x.x:myvol create  push-pem I get
14:23 johne_ Unable to fetch slave volume details. Please check the slave cluster and slave volume.
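For reference, the usual 3.6 geo-replication setup looks roughly like the sketch below (volume and host names are placeholders); note that the slave is addressed as host::volume with a double colon, and the slave volume has to exist and be started before create succeeds:

    gluster system:: execute gsec_create                    # generate the common pem keys on the master cluster
    gluster volume geo-replication mastervol slavehost::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost::slavevol start
    gluster volume geo-replication mastervol slavehost::slavevol status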
14:29 DV__ joined #gluster
14:32 al joined #gluster
14:37 mbukatov joined #gluster
14:44 jmarley joined #gluster
14:59 nbalacha joined #gluster
14:59 Bhaskarakiran joined #gluster
15:04 al joined #gluster
15:04 AdrianH joined #gluster
15:06 AdrianH Hello everybody, I have a couple questions, first one: Is it possible to mount 2 separate Gluster volumes on the same client? I am having problems with this, my client thinks they are both the same one and I don't understand why (different vol name, different host names ...)
15:07 AdrianH (also If I umount the first one, then mount the second one, it still thinks it is the first one)
15:07 victori joined #gluster
15:25 morse joined #gluster
15:26 johne_ Are you mounting them via the gluster client or nfs ?
15:27 AdrianH gluster
15:28 corretico joined #gluster
15:28 vovcia AdrianH: yes its possible and works fine
15:28 vovcia AdrianH: You must be doing sth wrong :)
15:28 AdrianH I am trying to locate the glusterfs.vol file to see if they are the same
15:31 AdrianH do you know where i can see glusterfs.vol file (on the client or on one of the peers)?
15:33 johne_ How are you doing the mounting on the client ?
15:33 johne_ What command ?
15:34 raghug joined #gluster
15:34 ndevos AdrianH: you can get the .vol file the client uses with: gluster system: getspec $VOLNAME
15:34 AdrianH fstab: Gluster_new1:/new-gluster-volume /var/new_gluster glusterfs defaults,backupvolfile-server=Gluster_new3 0 0
15:35 AdrianH ndevos: thanks
15:37 AdrianH OK I found the problem, the hosts in the glusterfs.vol file are the same
15:37 jbrooks joined #gluster
15:37 AdrianH johne_ & vovcia & ndevos: thanks for your help
15:38 ndevos AdrianH: does that match the output of "gluster volume info" too?
15:41 AdrianH ndevos: the info is correct, I made a mistake by using the same host names
15:43 AdrianH looking at gluster system: getspec $VOLNAME  I can see the hosts are the same, so with the current setup I can't mount both at the same time
15:43 AdrianH and if I want to mount the second one I have to update the IP i have in my hosts file
15:44 AdrianH that's why it would think it was the first one. I didn't know this: http://www.gluster.org/community/documentation/index.php/Understanding_vol-file
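A hedged example of what the working setup looks like: two volumes from two different trusted pools mounted side by side, each reachable through its own server hostnames (all names below are hypothetical):

    # /etc/fstab
    gluster-a1:/volume-a  /var/gluster_a  glusterfs  defaults,backupvolfile-server=gluster-a3  0 0
    gluster-b1:/volume-b  /var/gluster_b  glusterfs  defaults,backupvolfile-server=gluster-b3  0 0

    # confirm which servers each client graph points at
    gluster system: getspec volume-a | grep remote-host
    gluster system: getspec volume-b | grep remote-host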
15:51 woakes070048 joined #gluster
15:52 mckaymatt joined #gluster
16:08 rafi joined #gluster
16:08 woakes07004 joined #gluster
16:08 papamoose joined #gluster
16:08 nangthang joined #gluster
16:10 jiffin joined #gluster
16:11 cholcombe joined #gluster
16:12 calavera joined #gluster
16:12 woakes07004 has anyone set up gluster as the backend for ovirt storage? I am just wondering about the best use case; I have seen some people managing it with puppet and some with ovirt itself. Any advice would be much appreciated.
16:14 unclemarc joined #gluster
16:19 PatNarciso joined #gluster
16:20 jrm16020 joined #gluster
16:27 jiffin ndevos: kkeithley: i have resent the patch http://review.gluster.org/#/c/11144/
16:35 rafi joined #gluster
16:35 kdhananjay joined #gluster
16:38 ndevos jiffin: ok, thanks!
16:39 Rapture joined #gluster
17:00 theron_ joined #gluster
17:01 vmallika joined #gluster
17:11 PeterA joined #gluster
17:12 raghug joined #gluster
17:18 hagarth joined #gluster
17:24 wushudoin| joined #gluster
17:24 kovshenin joined #gluster
17:29 wushudoin| joined #gluster
17:33 vmallika joined #gluster
17:39 atalur joined #gluster
17:44 vmallika joined #gluster
17:45 shaunm_ joined #gluster
17:50 calavera joined #gluster
17:52 calavera joined #gluster
18:00 daMaestro joined #gluster
18:02 gem joined #gluster
18:04 vmallika joined #gluster
18:08 ninkotech joined #gluster
18:08 ninkotech_ joined #gluster
18:24 jmarley joined #gluster
18:26 rotbeard joined #gluster
18:31 theron joined #gluster
18:45 levlaz left #gluster
18:51 calavera joined #gluster
18:53 hchiramm_home joined #gluster
18:56 PatNarcisoAFK joined #gluster
19:03 Pupeno joined #gluster
19:03 Pupeno joined #gluster
19:22 lexi2 joined #gluster
19:29 cyberswat joined #gluster
19:33 atrius joined #gluster
19:47 calavera joined #gluster
20:13 unclemarc joined #gluster
20:13 cholcombe is 3.7 stable at this point?
20:13 cholcombe i'm seeing some odd behavior in the quota xlator
20:14 corretico joined #gluster
20:15 cholcombe basically the quota xlator seems to be missing writes and not counting them
20:16 cholcombe 3.6 works perfectly
20:17 calavera joined #gluster
20:19 glusterbot News from newglusterbugs: [Bug 1238850] setxattr does not fail with incorrect prefix <https://bugzilla.redhat.com/show_bug.cgi?id=1238850>
20:24 edwardm61 joined #gluster
20:24 cholcombe nevermind i think i see what's wrong.  The quota xlator is writing base64 now instead of hex like it did in 3.6
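A hedged way to inspect what the quota xlator stored, forcing a hex dump regardless of how getfattr would otherwise encode binary values; the brick path is a placeholder:

    # run against a directory on the brick, not on the mount point
    getfattr -d -m 'trusted.glusterfs.quota' -e hex /bricks/brick1/data/somedir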
20:31 shyam joined #gluster
20:44 calavera joined #gluster
20:51 arthurh joined #gluster
21:04 Rapture joined #gluster
21:07 psilvao1 Quick question: when a process removes files in the brick, does gluster release the inodes inside the filesystem?
21:08 vovcia interesting question
21:20 PatNarcisoAFK joined #gluster
21:25 cholcombe it should
21:25 cholcombe which filesystem?
21:26 psilvao1 in glusterfilesystem
21:26 psilvao1 our experience shows us that when you remove files inside the brick, gluster doesn't release inodes
21:27 psilvao1 but we want to hear from you if you have tested this condition..
21:28 cholcombe i've never tested that
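A hedged way to check this on a brick; the assumption here is that every regular file on a brick also has a gfid hard link under .glusterfs, so a file removed directly on the brick (rather than through the mount) leaves that second link and its inode behind:

    df -i /bricks/brick1                                     # brick inode usage, before and after the delete
    find /bricks/brick1/.glusterfs -type f -links 1 | head   # gfid links whose data file is gone (still holding inodes)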
21:31 arthurh Exploring potential use of a small gluster cluster for testing purposes, quite new at this, atm -- What would be considered a baseline "Optimal" configuration for a 3-node (4x4tb-disks each) distributed-dispersed config?  I'm having trouble wrapping my head around a distributed-dispersed application in this environment (versus, say, a 2/4-node distributed-replicated volume).
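One possible layout for those 3 nodes with 4 disks each, as a hedged sketch: 12 bricks arranged as two 4+2 disperse sets, so each node contributes 2 bricks per set, which matches the redundancy and keeps a single node failure survivable. Hostnames and brick paths are hypothetical, and gluster may ask for 'force' when a set has more than one brick per host:

    gluster volume create testvol disperse 6 redundancy 2 \
        node1:/bricks/b1/data node2:/bricks/b1/data node3:/bricks/b1/data \
        node1:/bricks/b2/data node2:/bricks/b2/data node3:/bricks/b2/data \
        node1:/bricks/b3/data node2:/bricks/b3/data node3:/bricks/b3/data \
        node1:/bricks/b4/data node2:/bricks/b4/data node3:/bricks/b4/data
    gluster volume start testvol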
21:32 calavera joined #gluster
21:41 calavera joined #gluster
21:54 coredump joined #gluster
21:57 theron joined #gluster
22:09 wkf joined #gluster
22:14 PatNarciso bug; annoyance; deleting a file via mounted volume will cause rebalance to 'fail' if the rebalance is performing a move on the file being deleted.
22:17 mator joined #gluster
22:30 jrm16020 joined #gluster
22:41 fyxim_ joined #gluster
22:44 gildub joined #gluster
22:45 Sjors joined #gluster
22:47 tdasilva joined #gluster
22:49 cogsu joined #gluster
23:04 ndk joined #gluster
23:26 calavera joined #gluster
23:35 calavera joined #gluster
23:52 badone joined #gluster
