
IRC log for #gluster, 2016-04-12


All times shown according to UTC.

Time Nick Message
00:12 dlambrig_ joined #gluster
00:54 baojg joined #gluster
01:03 beeradb joined #gluster
01:23 shyam joined #gluster
01:29 auzty joined #gluster
01:31 EinstCrazy joined #gluster
01:38 dlambrig_ joined #gluster
01:41 beeradb joined #gluster
01:49 luizcpg joined #gluster
01:51 chirino_m joined #gluster
02:03 bennyturns joined #gluster
02:03 rafi joined #gluster
02:04 muneerse joined #gluster
02:09 baojg joined #gluster
02:15 julim joined #gluster
02:17 mmckeen joined #gluster
02:17 Lee1092 joined #gluster
02:18 Telsin joined #gluster
02:23 alghost joined #gluster
02:24 kbyrne joined #gluster
02:27 Lee1092 joined #gluster
02:27 pocketprotector joined #gluster
02:27 DV joined #gluster
02:30 DV joined #gluster
02:33 vmallika joined #gluster
02:33 rastar joined #gluster
02:40 kshlm joined #gluster
02:46 nishanth joined #gluster
03:02 bennyturns joined #gluster
03:04 ramteid joined #gluster
03:19 luizcpg joined #gluster
03:25 nathwill joined #gluster
03:25 overclk joined #gluster
03:26 chirino joined #gluster
03:32 atinm joined #gluster
03:39 chirino joined #gluster
03:54 shubhendu joined #gluster
03:55 shubhendu joined #gluster
04:02 Gnomethrower joined #gluster
04:04 nbalacha joined #gluster
04:07 poornimag joined #gluster
04:09 sakshi joined #gluster
04:14 gem joined #gluster
04:24 itisravi joined #gluster
04:38 rouven joined #gluster
04:39 vmallika joined #gluster
04:40 kdhananjay joined #gluster
04:44 gowtham joined #gluster
04:48 jiffin joined #gluster
04:49 Gaurav_ joined #gluster
04:52 harish joined #gluster
04:59 ndarshan joined #gluster
04:59 kshlm joined #gluster
05:05 nathwill joined #gluster
05:05 EinstCrazy joined #gluster
05:06 cliluw joined #gluster
05:11 karnan joined #gluster
05:12 aspandey joined #gluster
05:13 ppai joined #gluster
05:14 Apeksha joined #gluster
05:14 aravindavk joined #gluster
05:15 ashiq_ joined #gluster
05:17 EinstCra_ joined #gluster
05:17 prasanth joined #gluster
05:20 Manikandan joined #gluster
05:24 karthik___ joined #gluster
05:30 Vaelatern joined #gluster
05:31 Bhaskarakiran joined #gluster
05:32 rastar joined #gluster
05:36 nishanth joined #gluster
05:39 decay joined #gluster
05:46 baojg joined #gluster
05:47 beeradb joined #gluster
05:52 hgowtham joined #gluster
05:54 natarej_ joined #gluster
05:59 haomaiwa_ joined #gluster
06:09 mhulsman joined #gluster
06:10 pur joined #gluster
06:14 kovshenin joined #gluster
06:15 rafi joined #gluster
06:18 spalai joined #gluster
06:21 hchiramm joined #gluster
06:31 skoduri joined #gluster
06:32 atalur joined #gluster
06:34 jtux joined #gluster
06:35 anil joined #gluster
06:37 hackman joined #gluster
06:47 DV__ joined #gluster
06:52 vmallika joined #gluster
06:55 aspandey joined #gluster
06:57 ramky joined #gluster
07:00 DV joined #gluster
07:07 [Enrico] joined #gluster
07:08 jri joined #gluster
07:11 [diablo] joined #gluster
07:25 Saravanakmr joined #gluster
07:33 fsimonce joined #gluster
07:34 anti[Enrico] joined #gluster
07:38 ivan_rossi joined #gluster
07:49 Marbug joined #gluster
07:57 ahino joined #gluster
07:59 Wizek joined #gluster
08:03 ctria joined #gluster
08:05 ahino1 joined #gluster
08:08 Slashman joined #gluster
08:12 pur joined #gluster
08:12 mkzero joined #gluster
08:22 baojg joined #gluster
08:23 atalur joined #gluster
08:25 itisravi joined #gluster
08:28 Wizek joined #gluster
08:43 om joined #gluster
08:46 sakshi joined #gluster
08:49 DV joined #gluster
08:53 DV joined #gluster
09:01 beeradb joined #gluster
09:03 dlambrig_ joined #gluster
09:05 julim joined #gluster
09:06 JesperA joined #gluster
09:12 itisravi joined #gluster
09:19 ahino joined #gluster
09:19 poornimag joined #gluster
09:26 spalai1 joined #gluster
09:27 ndarshan joined #gluster
09:42 haomaiwa_ joined #gluster
09:47 prasanth joined #gluster
09:48 DV joined #gluster
09:53 Raide joined #gluster
10:02 mowntan joined #gluster
10:02 mowntan joined #gluster
10:06 kotreshhr joined #gluster
10:12 rouven joined #gluster
10:14 muneerse2 joined #gluster
10:17 kshlm joined #gluster
10:18 Wizek_ joined #gluster
10:26 gem joined #gluster
10:27 kkeithley1 joined #gluster
10:31 arcolife joined #gluster
10:49 caitnop joined #gluster
11:10 hackman joined #gluster
11:13 johnmilton joined #gluster
11:23 prasanth joined #gluster
11:28 poornimag joined #gluster
11:31 Hesulan joined #gluster
11:44 shyam joined #gluster
11:45 ashiq_ joined #gluster
11:50 bluenemo joined #gluster
12:03 phycho joined #gluster
12:03 phycho hello :-)
12:10 rafi REMINDER: Gluster Community Bug Triage meeting to start now
12:11 phycho can someone tell me how to get data locality to work? or should it work by default?
12:11 phycho I have a mount that contains 500mb of small files; takes forever to read or write
12:11 phycho its mirrored across a WAN
12:11 phycho so added latency of (+20ms) wan link
12:13 jiffin phycho: sorry I am no clear, what did u meant by "data locality"?
12:13 phycho jiffin: if I read file back (which is mirrored on all the nodes); the read comes from the local machine I am on
12:13 jiffin s/no/ not
12:13 phycho rather than a remote brick
12:14 _Bryan_ joined #gluster
12:15 JoeJulian If the server is a client, the read does, indeed, come from the local brick. The health checks that are part of the lookup operation, however, *must* consult all the bricks in the replica. With a bunch of small files, this is where most of the overhead comes from.
12:15 jiffin phycho: you are using fuse mount, there is an option to read from a specific brick on a volume
12:15 phycho jiffin: thats right
12:15 phycho jiffin: thanks will read around that area
12:16 JoeJulian Still going to do self-heal checks.
12:16 JoeJulian s/self-heal/health/
12:16 glusterbot What JoeJulian meant to say was: Still going to do health checks.
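(A hedged aside on the read-locality knobs jiffin mentions, assuming a fuse mount of a hypothetical replicated volume "myvol"; cluster.choose-local and cluster.read-subvolume are the AFR options as I understand them, and per JoeJulian the lookup/health checks still touch every replica regardless.)
    gluster volume set myvol cluster.choose-local true
    # or pin reads to one specific replica; the subvolume name below is illustrative
    gluster volume set myvol cluster.read-subvolume myvol-client-0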
12:16 Manikandan joined #gluster
12:17 phycho so across a WAN I probably want geo-replication instead; right?
12:17 phycho for low-latency access
12:18 JoeJulian Maybe. Probably. geo-rep is unidirectional.
12:18 phycho ah. bi-directional is probably what I need
12:18 JoeJulian If you can write to one path, but read from another, you could have high latency writes, but low latency reads.
12:18 phycho as this is an active/active storage mount across two datacenters
12:18 phycho yeah
12:19 vmallika joined #gluster
12:19 phycho perhaps we are doing it wrong though; a mirror in the local dc using geo-replication to the remote dc with active/passive would work just as well
12:19 phycho right now I have 4 bricks in a 'raid1' style configuration across two DCs
12:20 phycho (replica: 4)
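(A hedged sketch of the geo-replication alternative being weighed here, using placeholder host and volume names; as JoeJulian says, geo-rep is unidirectional, master to slave only.)
    # run on the master cluster; "remotedc" and both volume names are placeholders
    gluster volume geo-replication localvol remotedc::remotevol create push-pem
    gluster volume geo-replication localvol remotedc::remotevol start
    gluster volume geo-replication localvol remotedc::remotevol status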
12:21 B21956 joined #gluster
12:22 JoeJulian @lucky dos and donts of gluster replication
12:22 glusterbot JoeJulian: https://joejulian.name/blog/glusterfs-replication-dos-and-donts/
12:25 ira joined #gluster
12:26 JoeJulian If anybody here is a gluster expert that's looking to shop for a new job, send me a PM.
12:29 post-factum JoeJulian: RH?
12:30 ndevos JoeJulian: is there a certain level of 'gluster expertise' that you are looking for?
12:32 JoeJulian Someone I can train up to be another me.
12:33 JoeJulian And no, post-factum, not Red Hat.
12:34 plarsen joined #gluster
12:40 pur_ joined #gluster
12:43 luizcpg joined #gluster
12:43 wnlx joined #gluster
12:46 post-factum JoeJulian: so you are tired and retire with pension?
12:47 ndevos post-factum: are you saying he is old? ;-)
12:48 pranithk joined #gluster
12:48 pranithk JoeJulian: what happened?
12:48 JoeJulian I just need a spare me. Nothing bad.
12:49 luizcpg joined #gluster
12:49 JoeJulian We have budget.
12:49 post-factum ndevos: when you say "old" you mean "experienced"
12:49 JoeJulian I think he really meant old.
12:49 pranithk JoeJulian: phew, good
12:50 JoeJulian He's ready to kick me out to pasture.
12:52 Romeor wazap
12:52 ndevos haha, no, I dont want you to retire, JoeJulian
12:53 Romeor So i decided to start without a disperse volume, as it will bring more pain when adding bricks /me thinks so and there are no stories about recovery
12:54 unclemarc joined #gluster
12:54 Romeor so just one brick with RAID6 and almost ready another one
12:55 ndevos JoeJulian: we can promote it a little more, maybe someone is reading along... It would involve Gluster and maybe OpenStack too?
12:56 ndevos and well, obviously chatting on IRC during working hours :)
12:56 mpietersen joined #gluster
12:57 JoeJulian Gluster and Salt. OpenStack is a plus, but not a requirement.
12:57 JoeJulian Though, really, any config management tool is probably good enough. Salt's easy to learn.
12:58 JoeJulian I'll be writing up the listing today and getting it approved over the next couple of days. Should be in time for Vault.
13:01 d0nn1e joined #gluster
13:01 ndevos JoeJulian: sounds cool, maybe check with amye and see if she would be willing to post it on the blog or planet.gluster.org
13:04 kotreshhr left #gluster
13:09 EinstCrazy joined #gluster
13:09 Norky joined #gluster
13:14 okajimmy joined #gluster
13:15 okajimmy Hello everyone, i am new to the group, and i need some help with the gluster service
13:17 okajimmy i had gluster working just fine between two servers on centos 6.7, but i had to reformat a partition from xfs to ext4, and when i did so gluster just crashed and the service wouldn't start any more
13:17 arcolife joined #gluster
13:17 post-factum just a side note: ceph is going to deprecate ext4 support
13:17 post-factum that is an unexpected movement
13:18 ramky joined #gluster
13:18 lalatenduM joined #gluster
13:18 skylar joined #gluster
13:19 mowntan joined #gluster
13:19 mowntan joined #gluster
13:19 mowntan joined #gluster
13:19 okajimmy but i need EXT4 for acl purposes, i need to set more than 50 ACLs per directory in the concerned "gluster brick"
13:20 post-factum what is the issue with xfs?
13:21 okajimmy xfs can't handle more than 22 ACLs, i was surprised.
13:21 ahino joined #gluster
13:23 post-factum okajimmy: wow yay no. not exactly
13:24 post-factum okajimmy: do you have v4 or v5 superblock?
13:27 rwheeler joined #gluster
13:28 jiffin joined #gluster
13:29 okajimmy how can i know the version of superblock?
13:29 post-factum i believe, it is v4, so yes, you have 25 acls limit
13:29 post-factum but with recent kernels you may use v5, and have higher limit on acls
13:32 okajimmy but when i use ext4 filesystem it seems to work fine :/
13:32 okajimmy uname -r = 2.6.32-573.el6.x86_64
13:32 hchiramm joined #gluster
13:35 JoeJulian To see why glusterd isn't starting, look in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log when you can't see any reason in there, try just running glusterd in the foreground, "glusterd --debug"
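(The same two steps as a copy-pasteable sketch:)
    less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    glusterd --debug    # run glusterd in the foreground and watch for the first E- or C-level message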
13:36 bennyturns joined #gluster
13:38 Apeksha joined #gluster
13:38 post-factum okajimmy: yep, for 2.6.32 xfsv5 is not an option
13:38 post-factum okajimmy: ext4 has higher limit on acls, smth approx 120 or so
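(A hedged sketch for telling XFS v4 and v5 superblocks apart; the crc field is how newer xfsprogs report it, and -m crc=1 at mkfs time selects v5. The device and mount point below are placeholders, and none of this applies on a 2.6.32 kernel, as noted above.)
    xfs_info /path/to/brick | grep crc    # crc=1 in the meta-data line means a v5 superblock
    # recreating with a v5 superblock (destroys data, needs a far newer kernel than 2.6.32):
    mkfs.xfs -m crc=1 /dev/sdXN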
13:39 okajimmy so what am i supposed to do then :/
13:42 EinstCrazy joined #gluster
13:42 JoeJulian ext4 should work. Did you read my suggestion 3 lines up?
13:42 post-factum okajimmy: ok, so check gluster logs plz
13:44 okajimmy yeah yeah there is the log:
13:44 okajimmy [2016-04-12 13:41:56.634626] D [MSGID: 0] [glusterd-store.c:2436:glusterd_store_retrieve_bricks] 0-management: Returning with -1
13:44 okajimmy [2016-04-12 13:41:56.634645] D [MSGID: 0] [glusterd-utils.c:896:glusterd_volume_brickinfos_delete] 0-management: Returning 0
13:44 okajimmy [2016-04-12 13:41:56.634658] D [MSGID: 0] [store.c:461:gf_store_handle_destroy] 0-: Returning 0
13:44 okajimmy [2016-04-12 13:41:56.634669] D [MSGID: 0] [glusterd-utils.c:940:glusterd_volinfo_delete] 0-management: Returning 0
13:44 okajimmy [2016-04-12 13:41:56.634678] E [MSGID: 106201] [glusterd-store.c:3078:glusterd_store_retrieve_volumes] 0-management: Unable to restore volume: gvol2
13:44 okajimmy [2016-04-12 13:41:56.634692] D [MSGID: 0] [glusterd-store.c:3103:glusterd_store_retrieve_volumes] 0-management: Returning with -1
13:44 okajimmy [2016-04-12 13:41:56.634702] D [MSGID: 0] [glusterd-store.c:4373:glusterd_restore] 0-management: Returning -1
13:44 okajimmy [2016-04-12 13:41:56.634729] E [MSGID: 101019] [xlator.c:433:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
13:44 okajimmy [2016-04-12 13:41:56.634741] E [graph.c:322:glusterfs_graph_init] 0-management: initializing translator failed
13:44 okajimmy [2016-04-12 13:41:56.634749] E [graph.c:661:glusterfs_graph_activate] 0-graph: init failed
13:44 okajimmy [2016-04-12 13:41:56.634965] D [logging.c:1766:gf_log_flush_extra_msgs] 0-logging-infra: Log buffer size reduced. About to flush 5 extra log messages
13:44 okajimmy [2016-04-12 13:41:56.634981] D [logging.c:1769:gf_log_flush_extra_msgs] 0-logging-infra: Just flushed 5 extra log messages
13:44 JoeJulian @paste
13:44 glusterbot JoeJulian: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
13:46 skoduri joined #gluster
13:46 post-factum okajimmy: so you reformatted brick without removing it first?
13:47 Romeor guess so :D
13:47 okajimmy yeah, actually i forgot to remove it first, i just unmounted it (i know, i am dumb)
13:48 nbalacha joined #gluster
13:49 post-factum nice
13:50 JoeJulian What version of gluster is this?
13:51 okajimmy glusterfs-server-3.7.10-1.el6.x86_64
13:58 aravindavk joined #gluster
13:59 ira joined #gluster
14:00 Wizek__ joined #gluster
14:01 okajimmy is there any issue for that problem ?
14:02 post-factum i feel the necessity to edit volfile manually, but cannot explain my feeling
14:03 JoeJulian Nah
14:03 JoeJulian okajimmy: ,,(paste) the entire output from "glusterd --debug" there's something that's missing from that previous paste attempt.
14:03 glusterbot okajimmy: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
14:06 coredump joined #gluster
14:06 unclemarc joined #gluster
14:07 okajimmy glusterd --debug
14:07 okajimmy [2016-04-12 14:06:14.076280] I [MSGID: 100030] [glusterfsd.c:2332:main] 0-glusterd: Started running glusterd version 3.7.10 (args: glusterd --debug)
14:07 okajimmy [2016-04-12 14:06:14.076363] D [logging.c:1792:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5
14:07 okajimmy [2016-04-12 14:06:14.076633] D [MSGID: 0] [glusterfsd.c:649:get_volfp] 0-glusterfsd: loading volume file /etc/glusterfs/glusterd.vol
14:07 okajimmy [2016-04-12 14:06:14.085906] I [MSGID: 106478] [glusterd.c:1337:init] 0-management: Maximum allowed open file descriptors set to 65536
14:07 okajimmy [2016-04-12 14:06:14.085958] I [MSGID: 106479] [glusterd.c:1386:init] 0-management: Using /var/lib/glusterd as working directory
14:07 okajimmy [2016-04-12 14:06:14.087241] D [MSGID: 0] [glusterd.c:410:glusterd_rpcsvc_options_build] 0-glusterd: listen-backlog value: 128
14:07 okajimmy joined #gluster
14:08 post-factum god
14:08 post-factum @paste
14:08 glusterbot post-factum: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
14:08 post-factum okajimmy: ^^ please
14:11 JoeJulian I haven't had to kick anyone for that in over a year. Let's not break that trend, please.
14:12 post-factum ye, it used to be lovely and friendly channel
14:12 vmallika joined #gluster
14:12 JoeJulian Pasting more than 2 or 3 lines in an IRC channel is considered bad form.
14:13 okajimmy sorry it's the first time i use IRC
14:14 JoeJulian That's why we give you instructions. :)
14:14 Romeor but you can freely paste more than 10. its ok
14:14 Romeor :D
14:14 Hesulan joined #gluster
14:15 JoeJulian @devoice Romeor
14:15 ahino joined #gluster
14:15 JoeJulian :P
15:15 okajimmy i just installed netcat, so i'll pipe the output to which address
14:15 okajimmy ?
14:17 ndevos okajimmy: its really like: echo Hello | nc termbin.com 9999
14:18 ndevos that will return a URL that you can paste here :)
14:20 okajimmy http://termbin.com/9csf
14:20 okajimmy sorry again
14:20 Romeor :(
14:20 Romeor okay. 10 was not enough. try 100
14:21 ndevos okajimmy: this one seems to be the issue, "C" stands for "critical":
14:21 ndevos C [MSGID: 106425] [glusterd-store.c:2421:glusterd_store_retrieve_bricks] 0-management: realpath () failed for brick /gluster/brick2/lv_config. The underlying file system may be in bad state [No such file or directory]
14:21 post-factum first blood, JoeJulian
14:22 ndevos okajimmy: you should check if /gluster/brick2/lv_config is mounted on boot, check if it is in /etc/fstab and such
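(ndevos' check as a sketch, using the brick path from the log line above:)
    grep /gluster/brick2/lv_config /etc/fstab    # still listed for mounting at boot?
    mount | grep /gluster/brick2                 # actually mounted right now?
    mount /gluster/brick2/lv_config              # remount it if it is in fstab but not mounted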
14:22 kshlm okajimmy, which version of gluster are you using?
14:23 ndevos kshlm: really? thats in the 1st line of the log output... 3.7.10
14:23 kshlm ndevos, I didn't check
14:23 * ndevos understands it may have been long day over there :)
14:24 * kshlm has been up since 3.30AM this morning and is really sleepy
14:24 okajimmy i deleted the concerned line in fstab
14:25 ndevos kshlm: dont let us distract you, and get some dinner!
14:25 kshlm Well that's the regression that caused .11 to be made.
14:25 ndevos okajimmy: do you want to use the /gluster/brick2/lv_config brick, or is that supposed to be removed?
14:26 kshlm ndevos, Thanks. I'm signing out now. :)
14:26 okajimmy i dont care about it, i already have a data backup :)
14:26 ndevos kshlm: good night!
14:26 Romeor left #gluster
14:27 Romeor joined #gluster
14:28 ndevos okajimmy: ah, as kshlm mentioned, this is most likely due to a known problem in 3.7.10, see the known issues on https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.10.md
14:28 glusterbot Title: glusterfs/3.7.10.md at release-3.7 · gluster/glusterfs · GitHub (at github.com)
14:28 ppai joined #gluster
14:29 ndevos but I wonder why it complains about the brick path, and not about the /var/lib/glusterd/... path, but that could be a logging incorrectness too :-/
14:30 ppai joined #gluster
14:30 okajimmy okay, i'll try the mentioned issues
14:31 ndevos okajimmy: there is a script that creates the missing directories, you could try that
14:31 wushudoin joined #gluster
14:33 DV joined #gluster
14:34 okajimmy after running the script on the node, i now have the glusterd service running but not glusterfsd
14:35 ndevos thats a step in the right direction then :)
14:35 okajimmy yeah yeah :D
14:36 ndevos if the directory /gluster/brick2/lv_config does not exist, the glusterfsd (brick) process would not be able to start
14:38 okajimmy it's already existing :/
14:45 kpease joined #gluster
14:47 okajimmy nothing, same problem: glusterfsd fails
14:48 okajimmy i removed all volumes
14:48 ndevos okajimmy: what do you exactly mean with "glusterfsd fails"?
14:50 okajimmy the glusterfsd service fails to start
14:51 ndevos okajimmy: I'm not sure what the glusterfsd service is, on Fedora/CentOS/RHEL there is only a glusterd service
14:53 okajimmy really O_O  but i am on centos 6.7, and i installed gluster from the glusterfs repo
14:53 farhorizon joined #gluster
14:54 post-factum if glusterd management is up and running, probably it is sufficient to start volume with force option?
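(A hedged sketch combining post-factum's force-start idea with ndevos' point that a missing brick path keeps the brick process from starting; gvol2 is the volume named in the earlier log paste.)
    mkdir -p /gluster/brick2/lv_config    # recreate the missing brick path (it will be empty, the old data is gone)
    gluster volume start gvol2 force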
14:54 ndevos hmm, strange, can you check what this returns? rpm -qf /etc/rc.d/init.d/glusterfsd
14:55 farhorizon joined #gluster
14:56 okajimmy rpm -qf returns ==> glusterfs-server-3.7.10-1.el6.x86_64
14:56 post-factum ndevos: in el7 there is separate glusterfsd.service
14:56 * ndevos scratches head
14:58 ndevos ah.
14:58 ndevos kkeithley_: should the glusterfsd.service from the RPMs not get pushed upstream at one point?
14:58 post-factum ndevos: not sure why it is needed. I thought glusterfsd is started by glusterd
15:01 ndevos post-factum: yes, and iirc it does not start anything, it only stops the brick processes on shutdown
15:02 ndevos post-factum: like http://pkgs.fedoraproject.org/cgit/rpms/glusterfs.git/tree/glusterfsd.service
15:02 glusterbot Title: rpms/glusterfs.git - glusterfs (at pkgs.fedoraproject.org)
15:03 okajimmy i ran the same script to fix the issue on the second node, surprise: both services are running -_-
15:03 ndevos wohoo \o/
15:03 okajimmy but just in one server hhhh :'(
15:04 okajimmy i'll try to create the volumes ...
15:04 kkeithley_ Legacy Fedora and EPEL RPMs have had a glusterfsd.{init,service} file. IIRC (and maybe I don't) they may be a JoeJulian thing. And no, it's never been sent upstream, because, as post-factum notes, not really needed. (And at least one consensus poll seemed to suggest don't even bother trying.)
15:04 kkeithley_ don't bother trying, it'll get shot down. ;-)
15:05 ndevos kkeithley_: well, there are different things here... a) restart glusterfsd on update, b) stop glusterfsd on shutdown/reboot
15:05 kkeithley_ mainly it's there in Fedora to for better shutdown. It's shouldn't be used to start glusterfsd
15:05 JoeJulian Because if the network stops before the bricks, the clients have to wait for ping-timeout.
15:05 ndevos kkeithley_: I think (b) is important to do, just to make sure that any outstanding I/O gets flushed - might prevent some self-heal later on?
15:05 kkeithley_ s/to for/for/
15:05 glusterbot What kkeithley_ meant to say was: mainly it's there in Fedora for better shutdown. It's shouldn't be used to start glusterfsd
15:06 kkeithley_ well, we can always revisit the decision not to have it upstream. ISTR suggesting taking it out of Fedora and some people howled.
15:07 ndevos JoeJulian: hmm, you mean that bricks should stop on shutdown/reboot, so that clients detect the missing brick quicker?
15:07 JoeJulian yes
15:07 JoeJulian Waiting for ping-timeout is bad if it's unnecessary.
15:08 ndevos well, I would see that as a valid thing to do for all packages...
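(A hedged sketch of what the Fedora glusterfsd unit effectively does at shutdown, so that clients see the disconnect immediately instead of waiting out the ping-timeout JoeJulian mentions:)
    # stop all brick processes cleanly before the network interface goes down
    pkill glusterfsd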
15:08 jmarley joined #gluster
15:09 muneerse joined #gluster
15:12 overclk joined #gluster
15:15 MrRobotto joined #gluster
15:17 MrRobotto hi guys. trying to setup a gluster 3.7 cluster on centos 6. mounting bricks results in "0-glusterfs-socket: 10023 is not a valid port identifier" with "net.ipv4.ip_local_reserved_ports = 10090,10023" in sysctl.conf
15:19 MrRobotto i found a couple of reported bugs in 3.4-alpha versions regarding that sysctl parameter parsing which were reported as fixed
15:19 MrRobotto as it seems, the bug(s) is/are still there
15:20 farhorizon joined #gluster
15:20 MrRobotto do you know any workarounds for this except setting the parameter to blank (which i can't)?
15:21 jmarley joined #gluster
15:22 JoeJulian Could you paste the rest of that error line so I can look at the source?
15:23 Gnomethrower joined #gluster
15:23 Apeksha joined #gluster
15:25 MrRobotto @JoeJulian [2016-04-12 15:10:54.265401] W [MSGID: 101082] [common-utils.c:2856:gf_ports_reserved] 0-glusterfs-socket: 10023 is not a valid port identifier
15:26 muneerse joined #gluster
15:26 bwerthmann joined #gluster
15:29 ivan_rossi left #gluster
15:29 muneerse2 joined #gluster
15:31 JoeJulian MrRobotto: Is there a warning right above that one?
15:32 MrRobotto just some info messages: http://pastebin.com/bmTwidab
15:32 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
15:38 JoeJulian Ok, sure, it's not a valid port identifier for the calling function on a client because a client will try to use ports <= GF_CLIENT_PORT_CEILING (1024)
15:38 spalai joined #gluster
15:40 MrRobotto those ip_local_reserved_ports are reserved for / used by a different daemon on that machine, i guess gluster should not complain about them
15:40 Hesulan joined #gluster
15:41 cliluw joined #gluster
15:44 DV joined #gluster
15:45 Gaurav_ joined #gluster
15:47 JoeJulian I guess I would just suggest you file a bug report asking to change the log level to debug when a reserved port exceeds GF_CLIENT_PORT_CEILING
15:47 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
15:47 MrRobotto ok, thank you
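(For reference, a sketch of inspecting the sysctl MrRobotto quotes; the warning fires because a client only probes ports at or below GF_CLIENT_PORT_CEILING (1024), so reserved ports above that are harmless and merely logged.)
    sysctl net.ipv4.ip_local_reserved_ports           # e.g. net.ipv4.ip_local_reserved_ports = 10023,10090
    cat /proc/sys/net/ipv4/ip_local_reserved_ports    # the same value gluster parses when mounting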
15:55 ItsMe` joined #gluster
16:02 dlambrig_ joined #gluster
16:09 skoduri joined #gluster
16:14 ItsMe` joined #gluster
16:17 jiffin joined #gluster
16:18 Leildin joined #gluster
16:18 luizcpg joined #gluster
16:22 nathwill joined #gluster
16:24 jiffin joined #gluster
16:30 okajimmy THANKS all of you, gluster just started as expected (perfectly), i just stopped, removed the volumes and restarted the glusterd & glusterfsd.
16:34 post-factum should gluster reuse brick ports on volume creation? i mean, lets say i create volume A, then volume B, then delete volume A, then create volume C, volume C won't use the ports for bricks that were used by A
16:40 JoeJulian It could.
16:40 JoeJulian okajimmy: Yay! Glad we could help!
16:41 post-factum JoeJulian: how?
16:42 Leildin Hi guys, I was wondering, is there any problem with trying to chroot a ftp service into a gluster mounted volume ? I can't seem to get it to happen ...
16:42 JoeJulian post-factum: I think if glusterd has been restarted since volume A was deleted, it will reset the last-port-used counter.
16:43 post-factum JoeJulian: ah, ok, will check that on next update
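(A hedged way to watch the port behaviour post-factum describes; the volume name is a placeholder.)
    gluster volume status volC    # the Port column shows which brick ports the freshly created volume picked up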
16:43 JoeJulian Leildin: Should be no problem. I've used chroot hundreds of times on gluster fuse mounts.
16:43 JoeJulian selinux perhaps?
16:44 Leildin hmmm ... I'll check but does chroot have a requisite of root ownership on folders up to the subfolder used ?
16:45 Leildin I'm wondering if the nobody nobody ownership of folders is the problem
16:47 JoeJulian Nope, but doesn't nobody:nobody have to have r-x permissions on all the parent folders? It depends on how it's implemented, I think.
16:49 cliluw joined #gluster
16:55 Leildin I need to have words with the guy who invented selinux
16:56 Leildin it works fine with selinux off and there's no reason why it's blocking anything I'm doing
16:56 Leildin thanks Joe
16:56 JoeJulian You're welcome.
16:56 JoeJulian selinux support through fuse has been a problem that's being addressed.
16:57 JoeJulian bug 1272868
16:57 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1272868 medium, unspecified, ---, pmoore, ASSIGNED , RFE: Add support for filesystem subtypes in SELinux
16:58 JoeJulian also https://github.com/gluster/glusterfs-specs/blob/master/accepted/SELinux-client-support.md
16:58 glusterbot Title: glusterfs-specs/SELinux-client-support.md at master · gluster/glusterfs-specs · GitHub (at github.com)
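(A hedged sketch for confirming it really is SELinux rather than permissions, plus a possible targeted fix; ftpd_use_fusefs is the boolean I would expect the EL policy to ship for ftp-on-FUSE, but check with getsebool that it actually exists before relying on it.)
    ausearch -m avc -ts recent | audit2why    # what exactly did SELinux deny?
    getsebool -a | grep -i ftp                # list the ftp-related booleans this policy provides
    setsebool -P ftpd_use_fusefs on           # assumed boolean name; may differ per policy version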
17:13 rastar joined #gluster
17:18 bennyturns joined #gluster
17:24 dencaval joined #gluster
18:02 hagarth joined #gluster
18:12 B21956 joined #gluster
18:12 chirino joined #gluster
18:19 dlambrig_ joined #gluster
18:21 cliluw joined #gluster
18:46 okoko joined #gluster
18:59 mhulsman joined #gluster
19:17 B21956 joined #gluster
19:53 kpease joined #gluster
19:57 kpease joined #gluster
20:11 DV joined #gluster
20:19 bowhunter joined #gluster
20:24 rwheeler joined #gluster
20:43 Philambdo joined #gluster
20:43 Philambdo left #gluster
20:59 bwerthmann joined #gluster
21:00 post-factum any of you tried to change inode size on ext4 without recreating the fs? just curious if it really works, was asked by some user about it
21:00 post-factum JoeJulian: you, maybe?
21:01 petan joined #gluster
21:01 JoeJulian Nope, never.
21:01 post-factum i believe this should be painful in time and io
21:01 JoeJulian Yeah, doesn't seem worth it.
21:01 post-factum but tune2fs has -I new_inode_size option, however, it is not even documented in man
21:02 post-factum ok, i don't care nevertheless ;)
21:03 v12aml joined #gluster
21:06 post-factum hmm, i can try that for fs in file. why not
21:07 post-factum Changing the inode size not supported for filesystems with the flex_bg feature enabled.
21:07 post-factum haa, there might be dragons
21:08 post-factum ext4 without flex_bg is pretty useless as ext4
21:09 post-factum Clearing the flex_bg flag would cause the the filesystem to be inconsistent.
21:09 post-factum so, JoeJulian, it is not only time and i/o consuming
21:09 post-factum it may eat your data
21:09 JoeJulian details
21:10 post-factum what could i clarify for you?
21:11 jbrooks joined #gluster
21:12 petan joined #gluster
21:12 post-factum https://patchwork.ozlabs.org/patch/10004/ i believe it is not possible even to turn that option off now
21:12 glusterbot Title: [e2fsprogs] tune2fs: refuse to unmark flex_bg via clear_ok_features - Patchwork (at patchwork.ozlabs.org)
21:14 JoeJulian post-factum: details as in, "Meh, that's just a minor detail. Who would care?"
21:14 JoeJulian sarcasm
21:15 post-factum oh, i need that sheet with "sarcasm" word on it
21:16 post-factum anyway, recreating fs without flex_bg option and changing inode size worked. however, even on small and empty fs it took a while
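(Roughly the scratch-file experiment post-factum describes; tune2fs -I only proceeds on a filesystem created without flex_bg, must run on an unmounted filesystem, and rewrites the inode tables, which is where the time goes.)
    truncate -s 1G /tmp/ext4.img
    mkfs.ext4 -F -O ^flex_bg -I 128 /tmp/ext4.img    # small inodes, flex_bg disabled
    tune2fs -I 256 /tmp/ext4.img                     # grow the inode size in place (slow, offline only)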
21:16 post-factum who would even bother with that
21:23 JoeJulian I have no idea. I don't even see the value in changing it.
21:24 JoeJulian Unless your files are tiny and your metadata is large and you're actually running out of inodes from metadata.
21:28 post-factum that is some unbelieveable corner case
21:29 post-factum s/unbelieveable/unbelievable/
21:29 glusterbot What post-factum meant to say was: that is some unbelievable corner case
21:29 post-factum one would prefer to use some database instead
21:31 post-factum (meanwhile consider reading epic thread on ceph ML about deprecating ext4 support)
21:37 bowhunter joined #gluster
21:44 skylar joined #gluster
21:49 ahino joined #gluster
21:49 julim joined #gluster
21:49 hackman post-factum: link to the thread ? :)
21:50 plarsen joined #gluster
21:50 chirino joined #gluster
22:04 nathwill joined #gluster
22:05 post-factum hackman: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-April/thread.html
22:05 glusterbot Title: The ceph-users April 2016 Archive by thread (at lists.ceph.com)
22:05 post-factum ctrl+f "deprecating"
22:07 DV joined #gluster
22:10 bennyturns joined #gluster
22:14 luizcpg joined #gluster
22:18 m0zes joined #gluster
22:39 cliluw joined #gluster
22:57 chirino_m joined #gluster
22:59 DV joined #gluster
23:30 bowhunter joined #gluster
23:30 bennyturns joined #gluster
