
IRC log for #gluster, 2017-06-28


All times shown according to UTC.

Time Nick Message
00:00 cholcombe_ joined #gluster
00:06 Alghost joined #gluster
00:07 snixor joined #gluster
00:08 Alghost joined #gluster
00:22 Alghost joined #gluster
00:22 Alghost_ joined #gluster
00:47 Alghost joined #gluster
00:48 Alghost_ joined #gluster
01:01 gyadav joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:36 thwam joined #gluster
02:37 Igel joined #gluster
02:38 JPaul joined #gluster
02:40 raginbajin joined #gluster
02:41 vinurs joined #gluster
03:50 riyas joined #gluster
03:52 nbalacha joined #gluster
04:09 eMBee n-st: it looks like the chain issue is fixed as the ssl report doesn't show anything. but the issue itself persists, so maybe the cause is actually something else. can you still reproduce it on your machines?
04:11 itisravi joined #gluster
04:24 atinm joined #gluster
04:26 skumar joined #gluster
04:27 susant joined #gluster
04:39 Shu6h3ndu joined #gluster
04:42 buvanesh_kumar joined #gluster
04:42 gyadav joined #gluster
04:42 buvanesh_kumar joined #gluster
04:43 vbellur joined #gluster
04:47 vbellur joined #gluster
04:47 susant joined #gluster
04:48 Saravanakmr joined #gluster
04:49 ppai joined #gluster
05:10 sahina joined #gluster
05:12 vbellur joined #gluster
05:12 vbellur joined #gluster
05:13 vbellur joined #gluster
05:14 vbellur joined #gluster
05:17 karthik_us joined #gluster
05:22 ndarshan joined #gluster
05:33 vbellur joined #gluster
05:35 apandey joined #gluster
05:37 hgowtham joined #gluster
05:41 itisravi_ joined #gluster
05:42 jiffin joined #gluster
05:47 sanoj joined #gluster
05:49 ppai joined #gluster
05:51 skoduri joined #gluster
05:54 Karan joined #gluster
05:56 Humble joined #gluster
05:57 Prasad joined #gluster
06:01 jtux joined #gluster
06:03 skumar_ joined #gluster
06:08 apandey joined #gluster
06:13 sanoj_ joined #gluster
06:13 nbalacha_ joined #gluster
06:13 Saravanakmr_ joined #gluster
06:13 ksandha_ joined #gluster
06:13 gyadav_ joined #gluster
06:13 atinmu joined #gluster
06:13 skumar_ joined #gluster
06:13 Prasad_ joined #gluster
06:13 Shu6h3ndu_ joined #gluster
06:13 vbellur joined #gluster
06:13 susant1 joined #gluster
06:13 jiffin1 joined #gluster
06:14 itisravi_ joined #gluster
06:14 kotreshhr joined #gluster
06:14 sac joined #gluster
06:14 skumar__ joined #gluster
06:15 darshan joined #gluster
06:15 skoduri joined #gluster
06:15 gyadav__ joined #gluster
06:15 Prasad__ joined #gluster
06:16 lalatenduM joined #gluster
06:16 jiffin joined #gluster
06:17 atinm joined #gluster
06:18 buvanesh_kumar_ joined #gluster
06:18 itisravi_ joined #gluster
06:19 karthik_us joined #gluster
06:21 apandey joined #gluster
06:23 Alghost joined #gluster
06:23 sac joined #gluster
06:30 nbalacha_ joined #gluster
06:30 Shu6h3ndu_ joined #gluster
06:30 Saravanakmr_ joined #gluster
06:31 ksandha_ joined #gluster
06:31 kotreshhr joined #gluster
06:31 sanoj_ joined #gluster
06:32 susant joined #gluster
06:32 vbellur joined #gluster
06:34 _KaszpiR_ joined #gluster
06:44 buvanesh_kumar_ joined #gluster
06:47 sona joined #gluster
06:49 jkroon joined #gluster
06:54 [diablo] joined #gluster
07:02 ankitr joined #gluster
07:05 rastar joined #gluster
07:05 buvanesh_kumar joined #gluster
07:12 Wizek_ joined #gluster
07:22 ivan_rossi joined #gluster
07:29 TBlaar joined #gluster
07:31 skoduri joined #gluster
07:35 Klas joined #gluster
07:35 ashiq joined #gluster
07:36 TBlaar2 joined #gluster
07:36 itisravi joined #gluster
07:46 humblec joined #gluster
07:49 TBlaar joined #gluster
07:49 karthik_us joined #gluster
07:50 TBlaar3 joined #gluster
07:52 TBlaar2 joined #gluster
07:58 Humble joined #gluster
08:07 mbukatov joined #gluster
08:44 skumar_ joined #gluster
08:58 buvanesh_kumar joined #gluster
08:59 buvanesh_kumar joined #gluster
08:59 kdhananjay joined #gluster
09:04 jiffin joined #gluster
09:07 buvanesh_kumar_ joined #gluster
09:24 panina joined #gluster
09:25 skoduri joined #gluster
09:33 jiffin1 joined #gluster
09:38 nbalacha_ kdhananjay, ping
09:38 glusterbot nbalacha_: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
09:39 kdhananjay nbalacha_: hi, tell me
09:39 nbalacha_ kdhananjay, does shard use xattrops?
09:39 nbalacha_ kdhananjay, sorry - did not check the code before asking
09:39 kdhananjay nbalacha_: yes it does.
09:40 nbalacha_ kdhananjay, ok. I am adding you as a reviewer for https://review.gluster.org/#/c/17630/
09:40 glusterbot Title: Gerrit Code Review (at review.gluster.org)
09:40 nbalacha_ kdhananjay, this is file migration scenario so would like your input
09:41 kdhananjay nbalacha_: okay, will take a look
09:41 nbalacha_ kdhananjay, thanks
09:49 atinm joined #gluster
09:55 zcourts joined #gluster
10:02 nbalacha_ hi all. If you have any feedback on the gluster documentation and what needs to be better, please drop an email to Gluster-users@gluster.org
10:03 vbellur nbalacha_: the most oft heard complaint by me is the lack of search capabilities with RTD
10:10 Alghost joined #gluster
10:12 nbalacha_ vbellur, thanks
10:12 zcourts joined #gluster
10:16 [diablo] joined #gluster
10:19 misc my biggest gripe is that it seems to lack coherency
10:20 nbalacha_ misc, we are working on that. Are there any specific topics/areas you would like to see improved?
10:21 misc nbalacha_: well, I am not sure how to explain what I find weird :/
10:22 misc but when I look, it mostly look like there is a lots of various documentation stitched together :/
10:23 shyam joined #gluster
10:23 misc (and I also do not want to be overly critical, because documentation is hard and I do not want to make anyone feel bad, because i know people have done work to write documentation even if that's not perfect)
10:23 skoduri joined #gluster
10:24 nbalacha_ misc, no worries - yes, documentation is hard but I think everyone agrees that the current docs need improvement
10:24 misc but for example, the administrator guide
10:24 nbalacha_ misc, so any and all feedback on what would help users is welcome
10:24 misc it is a mix of specific tasks, such as "Managing Directory Quotas" / "Monitoring Workload"
10:25 misc and non tasks topic, such as "Puppet Gluster" "Mandatory Locks"
10:25 nbalacha_ misc, agreed.
10:25 misc the menu on the left does not match the menu on the index, for another example
10:26 misc the terminology: https://gluster.readthedocs.io/en/latest/Quick-Start-Guide/Terminologies/
10:26 glusterbot Title: Terminologies - Gluster Docs (at gluster.readthedocs.io)
10:26 misc they are not sorted alphabetically
10:26 nbalacha_ misc, noted
10:27 nbalacha_ misc, I am sorry but I need to log off now. Can you please send an email with your feedback?
10:27 misc nbalacha_: I am not sure i can send a email with 50m of rant :p
10:27 misc but sure
10:27 nbalacha_ misc,  :)
10:27 misc I would recommend in fact do some usability testing
10:28 misc take a bunch of interns, say "here is the doc, your job is to deploy that"
10:28 misc nigelb: we can use your intern for that /o\
10:31 panina joined #gluster
10:42 Humble joined #gluster
10:50 kotreshhr joined #gluster
10:50 susant joined #gluster
11:06 Alghost_ joined #gluster
11:08 susant joined #gluster
11:11 nh2 joined #gluster
11:15 sanoj__ joined #gluster
11:15 bsivasub__ joined #gluster
11:15 rastar_ joined #gluster
11:15 itisravi_ joined #gluster
11:15 lalatenduM joined #gluster
11:15 skumar__ joined #gluster
11:16 gyadav joined #gluster
11:16 ndarshan joined #gluster
11:16 Prasad joined #gluster
11:16 susant1 joined #gluster
11:16 Karan joined #gluster
11:16 Saravanakmr joined #gluster
11:17 karthik_ joined #gluster
11:17 Shu6h3ndu__ joined #gluster
11:17 skoduri joined #gluster
11:17 shruti` joined #gluster
11:19 sona joined #gluster
11:24 kotreshhr joined #gluster
11:26 jiffin1 joined #gluster
11:30 aravindavk joined #gluster
11:37 atinm joined #gluster
11:38 zcourts joined #gluster
11:41 apandey joined #gluster
11:41 skoduri joined #gluster
11:43 kotreshhr left #gluster
11:51 susant joined #gluster
11:51 shyam joined #gluster
12:01 baber joined #gluster
12:05 Klas for some reason, our arbiter seems to need more than the normal servers, what could be the reason for this?
12:07 gyadav joined #gluster
12:07 vbellur joined #gluster
12:11 vbellur joined #gluster
12:12 vbellur joined #gluster
12:12 vbellur joined #gluster
12:14 itisravi_ Klas: more what?
12:14 Klas haha, I suck, sorry
12:14 Klas more memory
12:15 Klas all three servers are allotted 10 GB, both normal servers are fine with that while the arbiter goes quite close to the limit when under high traffic
12:15 Klas never crashed or anything, just a bit curious
12:15 vbellur1 joined #gluster
12:15 itisravi_ Klas: does it fall back to normal?
12:16 Klas yeah, no sign of actual leakage
12:16 itisravi_ mhmm.
12:16 Klas it's mostly a case of us monitoring whenever systems use swap, and that is the only gluster node where that happens regularly
12:17 itisravi_ Are there any self-heals happening when you observe this?
12:18 itisravi_ btw is there only one arbiter brick on that node?
12:19 Klas it's 2+1, and all servers are dedicated with exactly the same volumes
12:19 Klas how do I check for self-heals?
12:20 nbalacha joined #gluster
12:20 _KaszpiR_ joined #gluster
12:20 itisravi_ you would need to see glustershd.log on the 3 nodes and the client logs for messages like "Completed x self-heal"
12:20 itisravi_ where x is data or metadata or entry.
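The log check itisravi_ describes can be sketched as a small script (assumptions: the default /var/log/glusterfs log location, and this message wording, which varies slightly between gluster versions):

```shell
# Count completed self-heals by type in a glustershd or client log.
# Assumption: messages look like "Completed data selfheal on <gfid> ...";
# exact wording varies between gluster versions, so adjust the pattern.
count_heals() {
    grep -Eo 'Completed (data|metadata|entry) (self-?)?heal' "$1" | sort | uniq -c
}
log=${1:-/var/log/glusterfs/glustershd.log}
[ -r "$log" ] && count_heals "$log" || true
```

Run it on each of the three nodes; a burst of entries coinciding with the memory spike would point at self-heal traffic.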
12:23 nbalacha Does anyone else have any comments on the documents?
12:23 nbalacha not sure if my last msg got through
12:24 Klas itisravi_: nothing about it on arbiter at least
12:24 Klas latest heal was about 3 months ago there
12:26 Klas still waiting for grep results on the other two
12:26 itisravi_ nbalacha: There are many feature pages in the old website that would be nice to have in readthedocs as well. For example, see the discussion on https://github.com/gluster/glusterdocs/pull/236 where I reference a BZ.
12:26 glusterbot Title: Update the link to server quorum by itisravi · Pull Request #236 · gluster/glusterdocs · GitHub (at github.com)
12:26 itisravi_ Klas: ok
12:27 Klas nbalacha: I suck at documentational structures, that is the only reason I'm quiet about it =P
12:27 nbalacha Klas, how abt feedback on experience?
12:27 nbalacha itisravi_, ack
12:27 nbalacha Klas, experience using the docs that is
12:28 Klas nbalacha: generally confusing and very varied usage of terms
12:29 nbalacha Klas, ok
12:29 Klas generally, it's very noticeable that it has been done over several years by loads of different people
12:30 Klas itisravi_: nope, nothing since that same date there either
12:31 itisravi_ Klas: mhmm not sure then :(
12:31 Klas thanks anyway!
12:31 Klas it's not really critical, just peculiar
12:31 itisravi_ what is the workload?
12:32 Klas very light most of the time, just user files of a few different systems
12:32 Klas it will increase a bit soon, but it will be quite light
12:32 itisravi_ okay, I was wondering if you can simulate some kind of a reproducer
12:32 Klas currently I am importing a couple of million files though
12:33 Klas currently I'm rsyncing 2 million files taking up about a total of 2 TB
12:33 Klas or, resyncing, really
12:33 Klas that triggers it
12:33 itisravi_ ok
12:33 Klas we are migrating that system from AFS ;)
12:34 itisravi_ so what do you monitor for the memory consumption? RSS values?
12:34 itisravi_ ok
12:34 Klas just normal NRPE mem checks
12:35 * itisravi_ looks up NRPE
12:35 Klas ah, sorry
12:35 Klas nagios local checks
12:35 Klas it just checks /proc values
12:35 Klas basically same as free
12:35 itisravi_ ah ok
12:36 Klas hmm, interesting, memory usage has gone down but swap is full =P
12:36 Klas no issue there, and we have a script which will handle that later on
12:37 Klas just always fun to see full swap and loads of actual memory left
12:37 Klas (and, yes, we have decreased swappiness)
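A minimal sketch of the per-process numbers behind a check like this (assumption: Linux procfs; this is roughly the data an NRPE/free-style check aggregates):

```shell
# Print RSS and swap usage for every gluster process on this node,
# straight from /proc/<pid>/status (standard Linux field names).
gluster_mem() {
    for pid in $(pgrep -f gluster); do
        awk -v pid="$pid" '/^Vm(RSS|Swap)/ {print pid, $1, $2, $3}' \
            "/proc/$pid/status" 2>/dev/null
    done
}
gluster_mem
```

Comparing VmRSS per process across the two data nodes and the arbiter would show whether the growth sits in glusterfsd, glustershd, or elsewhere.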
12:38 karthik_ joined #gluster
12:39 skoduri joined #gluster
12:51 nbalacha kdhananjay, hi, got a  minute?
12:51 kdhananjay nbalacha: yes, tell me
12:52 nbalacha kdhananjay, currently dht_fxattrop  does not handle migrated files (no phase checks)
12:52 nbalacha kdhananjay, can that cause issues with shard?
12:53 Jacob8432 joined #gluster
12:53 nbalacha kdhananjay, from a quick code walkthrough it looks like it might
12:56 kdhananjay nbalacha: yeah, it can
12:56 nbalacha kdhananjay, thanks. I'll send a patch
12:57 kdhananjay nbalacha: we also need atomicity between reading xattrs from src and replaying them on dst copy through xattrop by rebalance, so that an xattrop by the client in the middle on the src file doesn't get lost, assuming that can happen
12:58 nbalacha kdhananjay, rebalance  does not use xattrop
12:58 rwheeler joined #gluster
12:59 kdhananjay nbalacha: ah it uses getxattr and setxattr?
12:59 nbalacha yes
13:00 kdhananjay nbalacha: hmm then that should be atomic (but thats only one of the things that lack atomicity during migration)
13:02 nbalacha kdhananjay, let me see if I have understood this
13:03 nbalacha kdhananjay, so an xattrop coming on the linkto file in the phase 2 of a migration will not be copied over to the dst file
13:03 nbalacha kdhananjay, is that right?
13:04 kdhananjay nbalacha: linkto here is the dst file?
13:04 nbalacha src after it has been copied
13:05 kdhananjay nbalacha: hmm so once the src has been copied and becomes a linkto file, there WILL be a getxattr followed by setxattr, I'm guessing. is that correct?
13:05 nbalacha that is done before it becomes a linkto
13:09 kdhananjay nbalacha: so my concern was this: assume the xattr value of the src file is X. Rebalance reads X through getxattr. Now the xattr is immediately increased by shard from the fuse mount to Y. Even if we assume this xattrop (Y) will be sent on both src and dst, rebalance might overwrite 'Y' with X through the inflight setxattr.
13:10 nbalacha kdhananjay, that would be another issue
13:12 kdhananjay nbalacha: Right
13:13 rwheeler joined #gluster
13:29 ppai joined #gluster
13:34 farhorizon joined #gluster
13:35 plarsen joined #gluster
13:38 skylar joined #gluster
13:43 shyam joined #gluster
13:48 WebertRLZ joined #gluster
13:53 farhorizon joined #gluster
14:09 Shu6h3ndu__ joined #gluster
14:14 plarsen joined #gluster
14:22 Shu6h3ndu joined #gluster
14:28 jstrunk joined #gluster
14:28 BitByteNybble110 joined #gluster
14:28 tom[] joined #gluster
14:29 BitByteNybble110 joined #gluster
14:37 tom[] joined #gluster
14:47 farhorizon joined #gluster
14:50 kpease joined #gluster
14:54 farhorizon joined #gluster
15:07 wushudoin joined #gluster
15:10 baber joined #gluster
15:12 rwheeler joined #gluster
15:17 riyas joined #gluster
15:17 aravindavk joined #gluster
15:17 atinm joined #gluster
15:19 shyam joined #gluster
15:29 farhoriz_ joined #gluster
16:02 shyam joined #gluster
16:17 ivan_rossi left #gluster
16:19 jiffin joined #gluster
16:22 baber joined #gluster
16:32 atinm joined #gluster
16:42 vbellur joined #gluster
16:44 shaunm joined #gluster
16:52 gnulnx If I have a replicate volume, and I want to back this data up off site, is it OK to sync from the brick directory instead of a mount point?
16:52 gnulnx Should be the same data as the brick contains a copy?
17:00 jkroon joined #gluster
17:06 Shu6h3ndu joined #gluster
17:13 farhorizon joined #gluster
17:16 shyam joined #gluster
17:23 baber joined #gluster
17:31 skoduri joined #gluster
17:44 rafi1 joined #gluster
17:48 WebertRLZ joined #gluster
17:48 gospod2 joined #gluster
18:00 JoeJulian gnulnx: if you maintain hardlinks and you don't have any distribute subvolumes (only replicate) then you should be fine.
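A hedged sketch of that brick-level backup (hypothetical paths; -H preserves hardlinks per JoeJulian's caveat, and .glusterfs/ — gluster's internal gfid hardlink store — is excluded on the assumption that only user data should go offsite):

```shell
# Sync a replica brick offsite, preserving hardlinks and skipping
# gluster's internal metadata directory.
backup_brick() {
    # $1 = brick directory, $2 = rsync destination
    rsync -aH --delete --exclude='/.glusterfs' "$1"/ "$2"/
}
# Example invocation (hypothetical host and paths):
# backup_brick /data/brick1/myvol backup@offsite:/backups/myvol
```

This only holds for a pure replicate volume; with distribute subvolumes a single brick sees only part of the namespace.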
18:12 shyam joined #gluster
18:14 jiffin joined #gluster
18:16 wushudoin joined #gluster
18:17 baber joined #gluster
18:24 rastar_ joined #gluster
18:53 jkroon joined #gluster
18:59 major each pool has its own GUID right?
19:36 baber joined #gluster
19:36 JoeJulian volume, yes, has its own uuid.
19:37 major wondering about the possibility of having a metadata-only volume which tracks the filenames and such, but has an extended attribute which points to the volume that contains the whole of the file
19:37 major and be able to point at N volumes
19:38 major initial writes go to the top-tier volume, and migrate data to backend volumes based on arbitrary rules
19:40 JoeJulian Sounds like a variation on how dht already works.
19:40 JoeJulian See the dht.linkto xattr.
19:41 major yah .. I was thinking of abusing that
19:41 major erm .. "using" ;)
19:41 JoeJulian :D
19:41 JoeJulian If you could make a predictive way of doing the lookup, that would be better.
19:42 major just a smarter version of Hot Tiering w/ N backends and redirecting to the backend via the dht.linkto
19:42 major yah .. no real predictable way to do it .. want every file to live in any number of backend volumes based on arbitrary criteria .. possibly set down at a per file/directory level
19:43 major or based on metadata about the file, such as "put really big damn files over here"
19:46 zcourts joined #gluster
20:56 major can you set auth.allow on a per-directory basis?
20:57 farhorizon joined #gluster
21:01 JoeJulian no
21:02 major hurm..
21:03 major gonna have to do it as an xattr then
21:45 panina joined #gluster
21:53 ingard joined #gluster
22:01 john51 joined #gluster
22:06 john51 joined #gluster
22:21 farhoriz_ joined #gluster
22:40 zcourts_ joined #gluster
22:42 masber joined #gluster
22:44 Alghost joined #gluster
22:46 Alghost_ joined #gluster
22:48 Alghost joined #gluster
22:52 zcourts joined #gluster
23:18 hvisage joined #gluster
23:36 al joined #gluster
23:54 Alghost joined #gluster
23:54 Alghost joined #gluster
23:58 vbellur joined #gluster
23:59 al joined #gluster
