
IRC log for #gluster, 2017-05-15


All times shown according to UTC.

Time Nick Message
01:23 Wizek_ joined #gluster
01:30 masber joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:52 shdeng joined #gluster
02:07 wushudoin joined #gluster
02:16 BlackoutWNCT1 joined #gluster
02:16 BlackoutWNCT1_ joined #gluster
02:17 BlackoutWNCT1_ Hey Guys, I've got an odd one, and am really in need of some assistance with this.
02:18 BlackoutWNCT1_ I've got a replica 3 setup with one machine as the arbiter.
02:18 BlackoutWNCT1_ The arbiter is currently maxing all cores from a glusterfsd process
02:19 BlackoutWNCT1_ It appears as though it's performing a rebalance.
02:19 BlackoutWNCT1_ Only issue is, I haven't scheduled a rebalance and the rebalance info shows some odd stats.
02:20 BlackoutWNCT1_ One of my nodes has like 40k files rebalanced, and the arbiter and other node have 0
02:20 BlackoutWNCT1_ it's also scanned 90k and failed 49, skipped 79
02:21 BlackoutWNCT1_ again, all others are 0
02:22 BlackoutWNCT1_ Actually, rebalance may not be my issue. status there says completed. I think 2 of the glusterfsd processes may be stuck though.
02:22 BlackoutWNCT1_ But I'm not sure how to confirm this.
02:23 BlackoutWNCT1_ one has 233h uptime, and the other has 179h
02:23 BlackoutWNCT1_ Both of these are using a tonne of CPU
02:23 BlackoutWNCT1_ 600% and 350% respectively.
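
A rough sketch of the usual first diagnostic steps for a situation like this, assuming a hypothetical volume name gv0 (standard gluster CLI commands; names and paths are placeholders):

    # Per-node rebalance counters (rebalanced/scanned/failed/skipped files)
    gluster volume rebalance gv0 status

    # PIDs of the brick (glusterfsd) processes, to match against top output
    gluster volume status gv0

    # Dump brick-process internals (pending frames, locks, inode tables)
    # to /var/run/gluster/ to see what a busy glusterfsd is actually doing
    gluster volume statedump gv0
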
02:27 Jacob8432 joined #gluster
02:45 prasanth joined #gluster
02:59 kramdoss_ joined #gluster
03:06 Gambit15 joined #gluster
03:23 Jacob843 joined #gluster
03:34 Shu6h3ndu joined #gluster
03:42 nbalacha joined #gluster
03:45 atinm joined #gluster
04:03 itisravi joined #gluster
04:13 riyas joined #gluster
04:29 buvanesh_kumar joined #gluster
04:45 hgowtham joined #gluster
04:53 skoduri joined #gluster
05:00 kdhananjay joined #gluster
05:01 glisigno1i joined #gluster
05:02 ankitr joined #gluster
05:04 jiffin joined #gluster
05:05 skumar joined #gluster
05:07 glisignoli joined #gluster
05:12 glisigno1i joined #gluster
05:21 ndarshan joined #gluster
05:22 chawlanikhil24 joined #gluster
05:24 apandey joined #gluster
05:24 aravindavk joined #gluster
05:25 sona joined #gluster
05:43 ankitr joined #gluster
05:51 atinm joined #gluster
05:51 kotreshhr joined #gluster
05:52 Karan joined #gluster
05:54 lalatenduM joined #gluster
05:55 ankitr joined #gluster
05:55 msvbhat joined #gluster
05:57 apandey_ joined #gluster
05:58 derjohn_mob joined #gluster
06:02 itisravi joined #gluster
06:06 sanoj joined #gluster
06:08 ppai joined #gluster
06:09 susant joined #gluster
06:15 chawlanikhil24 joined #gluster
06:21 ankitr joined #gluster
06:22 ayaz joined #gluster
06:29 gyadav joined #gluster
06:31 TBlaar joined #gluster
06:32 jiffin1 joined #gluster
06:32 BitByteNybble110 joined #gluster
06:35 chawlanikhil24 joined #gluster
06:42 Prasad joined #gluster
06:42 om3 joined #gluster
06:47 kotreshhr joined #gluster
06:54 [diablo] joined #gluster
06:55 ivan_rossi joined #gluster
06:56 jtux joined #gluster
06:59 rastar joined #gluster
07:00 bartden joined #gluster
07:00 bartden Hi, does glusterfs support local disk caching on the client? Or can it be used with cachefs for example?
07:08 jkroon joined #gluster
07:13 poornima_ joined #gluster
07:19 jiffin1 joined #gluster
07:22 armyriad joined #gluster
07:22 ivan_rossi left #gluster
07:26 ppai bartden, No, it doesn't do local disk caching. Glusterfs client does in-memory caching though.
07:28 bartden ppai yes but with genetic data (~300GB) it becomes quite expensive
07:29 ppai bartden, you're right. But AFAICT, it needs support from FUSE
07:30 bartden ok, but using nfs as client to mount the gluster volume, would that work? because NFS has built in support for cachefs
07:33 ppai bartden, I'm not sure. jiffin, do you know ?
07:33 apandey__ joined #gluster
07:34 poornima_ joined #gluster
07:35 mbukatov joined #gluster
07:41 ayaz joined #gluster
07:43 jtux joined #gluster
07:44 kotreshhr joined #gluster
07:48 itisravi_ joined #gluster
07:50 bartden Additional question: when i enable bitrot, will it sign each file during the creation process, or will it sign the file afterwards (asynchronously)? I want to use this feature to make sure files are still the same after saving them on gluster
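
For reference, bitrot signing in gluster is done lazily by the bitrot daemon (reportedly some time after a file's last open descriptor is closed), not synchronously at write time. A minimal sketch of enabling it, with a hypothetical volume name gv0:

    # Enable bitrot detection; starts the signer and scrubber daemons
    gluster volume bitrot gv0 enable

    # Scrubbing (verification against the stored signatures) is scheduled
    gluster volume bitrot gv0 scrub-frequency daily
    gluster volume bitrot gv0 scrub-throttle lazy
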
07:50 fsimonce joined #gluster
07:57 jiffin bartden: are you talking about the gluster nfs server?
07:57 bartden yes
07:57 jiffin i mean the inbuilt one
07:57 jiffin gluster nfs server won't cache any information at its layer by default
07:59 bartden no, it's on the client that i want to enable caching, using cachefs for example. Mounting an nfs share allows the fsc option, which keeps cached data on a local disk
08:00 serg_k joined #gluster
08:07 atinm joined #gluster
08:07 Peppard joined #gluster
08:14 jiffin bartden: what happens if there is more than one client?
08:15 bartden i don’t see any issues? Why?
08:18 aravindavk joined #gluster
08:21 jiffin bartden: cache only used for read operations?
08:21 bartden yes
08:21 jiffin how/when will you update the cache data?
08:22 jiffin bartden: then multiple clients are not an issue
08:22 Gugge why would he know, he's just a user of the linux cachefs feature, which promises "it just works"
08:31 bartden :)
08:42 flying joined #gluster
08:44 sanoj joined #gluster
08:53 Seth_Karlo joined #gluster
09:02 social joined #gluster
09:06 sona joined #gluster
09:10 ashka joined #gluster
09:10 chawlanikhil24 joined #gluster
09:10 ashka joined #gluster
09:16 itisravi joined #gluster
09:21 Seth_Kar_ joined #gluster
09:29 gyadav_ joined #gluster
09:39 gem joined #gluster
09:40 legreffier joined #gluster
09:41 MrAbaddon joined #gluster
09:44 [fre] left #gluster
09:49 kotreshhr joined #gluster
09:51 chawlanikhil24 joined #gluster
10:08 Karan joined #gluster
10:11 kshlm joined #gluster
10:12 msvbhat joined #gluster
10:15 ppai bartden, the 'fsc' mount option should theoretically work when accessing glusterfs via NFS (both ganesha or gNFS) but there's an issue. Read more about it here: https://bugzilla.redhat.com/show_bug.cgi?id=1221099
10:15 glusterbot Bug 1221099: medium, unspecified, ---, ndevos, ASSIGNED , gNFSd does not work correctly/consistently with FSCache/CacheFilesd
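
The client-side setup under discussion would look roughly like this, with hypothetical server/volume names (requires the cachefilesd package on the client, and see the bug above for gNFS caveats):

    # On the client: start the FS-Cache userspace daemon
    systemctl start cachefilesd

    # Mount the gluster volume over NFSv3 with FS-Cache enabled via 'fsc'
    mount -t nfs -o vers=3,fsc server1:/gv0 /mnt/gv0
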
10:16 derjohn_mob joined #gluster
10:18 rastar joined #gluster
10:19 gyadav__ joined #gluster
10:29 apandey_ joined #gluster
10:31 apandey__ joined #gluster
10:33 ankitr joined #gluster
10:41 gyadav_ joined #gluster
10:43 flying joined #gluster
10:47 atinm joined #gluster
10:47 Teraii joined #gluster
10:48 k0nsl joined #gluster
10:48 k0nsl joined #gluster
11:07 kotreshhr joined #gluster
11:10 gem joined #gluster
11:15 Teraii joined #gluster
11:17 ankitr joined #gluster
11:19 aravindavk joined #gluster
11:19 skumar_ joined #gluster
11:25 chawlanikhil24 joined #gluster
11:30 project0 joined #gluster
11:33 gyadav__ joined #gluster
11:39 cloph too bad that my geo-rep bug re symlinks doesn't seem to be severe enough to get fixed anytime soon :-/ - guess it also shows that not many people use geo-replication with "advanced" stuff like symlinks ;-)
11:43 Klas I can understand why symlinks are tricky, though ;)
11:46 cloph I wouldn't mind if the link-targets on the geo-replication target didn't match up/point to the same target, but I do mind that the geo-replication goes into a faulty state and doesn't sync at all.. (and instead creates gigabytes of logfiles with errors, filling up the disk :-))
11:55 Klas hehe
11:55 Klas yeah, that I can understand
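
A minimal sketch of checking for the faulty state cloph describes, with hypothetical master/slave names (the exact log directory layout varies by version):

    # The STATUS column shows "Faulty" when a worker is crash-looping
    gluster volume geo-replication mastervol slavehost::slavevol status

    # The worker logs that can fill the disk live under this tree
    # on the master nodes
    ls -lh /var/log/glusterfs/geo-replication/
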
11:56 skumar joined #gluster
11:56 Teraii joined #gluster
12:22 prasanth joined #gluster
12:23 atinm joined #gluster
12:29 bmurt joined #gluster
12:29 chawlanikhil24 joined #gluster
12:33 ppai joined #gluster
12:37 vbellur joined #gluster
12:38 MrAbaddon joined #gluster
12:40 plarsen joined #gluster
12:41 msvbhat joined #gluster
12:43 Seth_Karlo joined #gluster
12:46 baber joined #gluster
12:51 ankitr joined #gluster
13:02 prasanth joined #gluster
13:08 sanoj joined #gluster
13:08 susant left #gluster
13:21 buvanesh_kumar joined #gluster
13:29 kramdoss_ joined #gluster
13:31 msvbhat joined #gluster
13:32 kotreshhr joined #gluster
13:49 shaunm joined #gluster
13:57 kraynor5b__ joined #gluster
14:09 flyingX joined #gluster
14:10 flying joined #gluster
14:17 farhorizon joined #gluster
14:17 glustin joined #gluster
14:19 Wizek_ joined #gluster
14:21 nh2 joined #gluster
14:22 flyingX joined #gluster
14:23 flying joined #gluster
14:24 flyingX joined #gluster
14:26 flying_ joined #gluster
14:39 social joined #gluster
14:40 aravindavk joined #gluster
14:51 kramdoss_ joined #gluster
14:52 wushudoin joined #gluster
14:57 wushudoin joined #gluster
14:58 nbalacha joined #gluster
15:00 project0 joined #gluster
15:01 gyadav__ joined #gluster
15:06 nh2 joined #gluster
15:13 msvbhat joined #gluster
15:18 genial joined #gluster
15:21 baber joined #gluster
15:22 genial Hello, for a replica-3 volume is there a difference between running `gluster replace-brick $volume $brick1 $brick2 commit force` and `gluster add-brick $volume replica 4 $brick2` followed by a `gluster remove-brick $volume replica 3 $brick1` post-heal?
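
The two approaches being compared, sketched with hypothetical host/brick names; both end with brick1 replaced by brick2, but replace-brick does it in one step (healing to the new brick starts automatically), while add/remove-brick lets you wait for the heal to finish before dropping the old brick:

    # Option 1: one-step replacement
    gluster volume replace-brick myvol host1:/bricks/b1 host3:/bricks/b2 commit force

    # Option 2: grow to replica 4, heal, then shrink back to replica 3
    gluster volume add-brick myvol replica 4 host3:/bricks/b2
    gluster volume heal myvol info    # wait until no entries are pending
    gluster volume remove-brick myvol replica 3 host1:/bricks/b1 force
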
15:34 jiffin joined #gluster
15:42 plarsen joined #gluster
15:47 riyas joined #gluster
15:55 ankitr joined #gluster
16:01 baber joined #gluster
16:07 farhorizon joined #gluster
16:37 farhorizon joined #gluster
16:39 Seth_Kar_ joined #gluster
16:41 farhorizon joined #gluster
16:46 om2_ joined #gluster
16:51 gyadav__ joined #gluster
16:53 AppStore What's the performance difference between CephFS and GlusterFS nowadays?
17:48 [diablo] joined #gluster
17:51 rastar joined #gluster
18:03 scones joined #gluster
18:06 bwerthmann joined #gluster
18:32 baber joined #gluster
18:32 genial left #gluster
18:36 gyadav__ joined #gluster
18:38 ashiq joined #gluster
18:38 Karan joined #gluster
18:43 marlinc joined #gluster
18:46 gyadav__ joined #gluster
19:00 gyadav__ joined #gluster
19:37 rwheeler joined #gluster
19:48 derjohn_mob joined #gluster
19:50 Karan joined #gluster
20:02 farhorizon joined #gluster
20:05 baber joined #gluster
20:25 farhorizon joined #gluster
20:29 JoeJulian AppStore: depends on the posix command. In my own performance tests, Ceph's still about 33% slower than gluster for databases and VM images.
20:33 farhorizon joined #gluster
20:36 shyam joined #gluster
20:42 armyriad joined #gluster
21:17 Vapez joined #gluster
21:17 Vapez joined #gluster
21:33 ashiq joined #gluster
21:40 MrAbaddon joined #gluster
21:42 Teraii joined #gluster
21:43 Teraii_ joined #gluster
22:05 AppStore JoeJulian: When we last looked at the possibility of using GlusterFS or CephFS as a distributed filesystem, we ended up using GlusterFS. This was back in version 3.4 or so. We found that while GlusterFS performed adequately in terms of latency and throughput, we had some issues with stability.
22:06 AppStore Nodes would just randomly stop working, and debugging why usually took hours.
22:07 AppStore Mostly due to a large amount of log data with little indication of the importance of the message you were reading. Has there been any work done to improve this?
22:07 AppStore The debugging part that is.
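
One knob relevant to the log-volume complaint: per-component log levels are tunable, so routine chatter can be cut down to warnings and above (volume name hypothetical):

    # Reduce client-side and brick-side log verbosity
    gluster volume set myvol diagnostics.client-log-level WARNING
    gluster volume set myvol diagnostics.brick-log-level WARNING
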
22:35 baber joined #gluster
23:44 nh2 joined #gluster
