
IRC log for #gluster, 2015-08-17


All times shown according to UTC.

Time Nick Message
00:09 plarsen joined #gluster
00:46 DV joined #gluster
00:49 harish joined #gluster
01:00 dlambrig joined #gluster
01:02 lkoranda joined #gluster
01:02 xavih joined #gluster
01:02 malevolent joined #gluster
01:13 ninkotech joined #gluster
01:28 topshare joined #gluster
01:29 samikshan joined #gluster
01:38 dlambrig joined #gluster
01:54 harish joined #gluster
02:01 al joined #gluster
02:01 nangthang joined #gluster
02:02 7GHAAY37Y joined #gluster
02:08 overclk joined #gluster
02:10 Lee1092 joined #gluster
02:20 sbaugh joined #gluster
02:38 dlambrig joined #gluster
02:57 bennyturns joined #gluster
02:58 gildub joined #gluster
03:02 pdrakeweb joined #gluster
03:02 haomaiwa_ joined #gluster
03:10 al joined #gluster
03:15 sbaugh so.... does anyone here know a networked filesystem that just has the pretty basic function of allowing me to explicitly say, "cache this file to the local disk and if the network goes down, keep running the filesystem with all the fully-cached files present"
03:15 sbaugh can gluster do that?
03:28 catern joined #gluster
03:29 catern (still want an answer)
03:37 nbalacha joined #gluster
03:38 [7] joined #gluster
03:43 msvbhat joined #gluster
03:44 RobertLaptop joined #gluster
03:45 shubhendu joined #gluster
03:47 itisravi joined #gluster
03:48 jwd joined #gluster
03:51 jwaibel joined #gluster
03:53 overclk joined #gluster
03:58 nishanth joined #gluster
04:01 haomaiwa_ joined #gluster
04:03 atinm joined #gluster
04:10 gem joined #gluster
04:19 kshlm joined #gluster
04:23 kkeithley1 joined #gluster
04:23 sakshi joined #gluster
04:24 kanagaraj joined #gluster
04:25 ramteid joined #gluster
04:25 neha joined #gluster
04:26 sripathi joined #gluster
04:31 yazhini joined #gluster
04:36 aravindavk joined #gluster
04:40 RameshN joined #gluster
04:41 Manikandan joined #gluster
04:41 meghanam joined #gluster
04:43 ashiq joined #gluster
04:48 baojg joined #gluster
04:49 ramky joined #gluster
04:50 haomaiwang joined #gluster
04:52 vimal joined #gluster
04:57 deepakcs joined #gluster
05:02 haomaiwa_ joined #gluster
05:10 ndarshan joined #gluster
05:12 overclk joined #gluster
05:17 Bhaskarakiran joined #gluster
05:23 kkeithley1 joined #gluster
05:25 overclk joined #gluster
05:31 hgowtham joined #gluster
05:38 rafi joined #gluster
05:46 dlambrig joined #gluster
05:56 Manikandan joined #gluster
05:59 Bhaskarakiran joined #gluster
06:02 haomaiwang joined #gluster
06:08 mbukatov joined #gluster
06:12 meghanam joined #gluster
06:12 corretico joined #gluster
06:13 vmallika joined #gluster
06:14 raghu joined #gluster
06:18 poornimag joined #gluster
06:21 Manikandan joined #gluster
06:21 overclk joined #gluster
06:23 Saravana_ joined #gluster
06:35 jtux joined #gluster
06:37 rafi1 joined #gluster
06:45 jockek joined #gluster
06:49 nbalacha joined #gluster
07:01 jcsp joined #gluster
07:02 haomaiwa_ joined #gluster
07:03 baojg joined #gluster
07:05 nangthang joined #gluster
07:12 jiffin joined #gluster
07:16 deniszh joined #gluster
07:16 doekia joined #gluster
07:17 Slashman joined #gluster
07:24 rafi joined #gluster
07:26 rafi2 joined #gluster
07:27 elico joined #gluster
07:35 meghanam joined #gluster
07:39 haomai___ joined #gluster
07:40 haomaiwang joined #gluster
07:49 rafi joined #gluster
07:50 maveric_amitc_ joined #gluster
07:51 fsimonce joined #gluster
07:54 tanuck joined #gluster
07:58 nbalacha joined #gluster
07:58 * Romeor sees dead people
08:00 ctria joined #gluster
08:01 Trefex joined #gluster
08:02 haomaiwa_ joined #gluster
08:02 jcastill1 joined #gluster
08:03 Trefex good morning rafi
08:07 jcastillo joined #gluster
08:07 LebedevRI joined #gluster
08:13 overclk joined #gluster
08:24 rjoseph joined #gluster
08:28 gletessier_ joined #gluster
08:33 jcastill1 joined #gluster
08:38 rafi Trefex: gud morning
08:39 jcastillo joined #gluster
08:39 baojg joined #gluster
08:42 Alex-31 joined #gluster
08:42 Alex-31 hello
08:42 glusterbot Alex-31: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
08:45 anonymus joined #gluster
08:47 Alex-31 I have a test platform spread across 4 different sites. The links between them are WAN connections with low bandwidth. Is it possible to tune performance with caching options other than performance.cache-size (I have set it to 1GB)?
08:47 Alex-31 actually, for a folder of 940MB with 144 files/directories, an ls -l command takes more than 6 seconds ....
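(Aside: slow ls -l on a FUSE mount is usually dominated by per-file stat round trips, so the metadata and readdir caching options are the usual first knobs to try in addition to performance.cache-size. A minimal, hedged sketch assuming a volume named myvol; the option names are from GlusterFS 3.x and the values are only illustrative, not recommendations:)

    gluster volume set myvol performance.cache-size 1GB
    gluster volume set myvol performance.stat-prefetch on
    gluster volume set myvol performance.readdir-ahead on
    gluster volume set myvol performance.io-thread-count 32
    gluster volume info myvol    # confirm the options took effect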
08:49 overclk joined #gluster
08:51 anonymus hi guys
08:52 anonymus could you please explain to me why I see no files on one brick of a distributed replicated volume?
08:54 gletessier Hi guy, did you do a "gluster volume status" to check your brick status?
08:54 anonymus yes
08:54 anonymus moment
08:54 anonymus Brick                : Brick be8:/sdc1
08:54 anonymus online
08:54 anonymus rw
08:55 anonymus ls /sdc1 | total 0
08:55 gletessier you have only one brick on your volume? because I don't see anything in your line
08:55 anonymus moment
08:56 gletessier https://public.pad.fsfe.org/p/anonymus
08:56 glusterbot Title: FSFE Etherpad: public instance (at public.pad.fsfe.org)
08:56 gletessier lol
08:57 anonymus http://pastebin.com/XyXtH3a3
08:57 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
08:58 anonymus here is the status of vol
08:58 anonymus 'find' finds files on all the nodes besides be8
09:00 neha joined #gluster
09:00 gletessier i'm trying to discuss with you directly
09:02 haomaiwa_ joined #gluster
09:02 anonymus so
09:02 anonymus gletessier:
09:04 Norky joined #gluster
09:09 skoduri joined #gluster
09:09 kkeithley1 joined #gluster
09:12 rafi Trefex: can you file a bug for this in bugzilla for glusterfs component ?
09:12 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
09:15 autoditac_ joined #gluster
09:16 mdavidson joined #gluster
09:17 kbyrne joined #gluster
09:19 overclk joined #gluster
09:21 overclk joined #gluster
09:30 overclk joined #gluster
09:36 tanuck_ joined #gluster
09:37 itisravi_ joined #gluster
09:40 skoduri joined #gluster
09:42 itisravi joined #gluster
09:46 c0m0 joined #gluster
09:46 shubhendu joined #gluster
09:47 schandra joined #gluster
09:47 overclk joined #gluster
09:48 schandra hchiramm_: ping, the recent build has successfully passed
09:50 RameshN joined #gluster
09:52 ndarshan joined #gluster
09:52 nishanth joined #gluster
09:55 overclk joined #gluster
09:57 harish joined #gluster
09:57 hchiramm_ schandra++ awesome !
09:57 glusterbot hchiramm_: schandra's karma is now 1
09:57 hchiramm_ thanks schandra!!!
10:02 haomaiwa_ joined #gluster
10:02 schandra hchiramm_: cool :)
10:17 atinm joined #gluster
10:20 anil joined #gluster
10:25 baojg joined #gluster
10:28 sripathi joined #gluster
10:33 Manikandan_ joined #gluster
10:49 ira joined #gluster
10:51 overclk joined #gluster
10:56 kkeithley1 joined #gluster
11:01 ndarshan joined #gluster
11:02 haomaiwa_ joined #gluster
11:04 shubhendu joined #gluster
11:06 xor_23 joined #gluster
11:07 gem_ joined #gluster
11:09 xor_23 is anyone available to answer a quick question?
11:09 Manikandan_ joined #gluster
11:10 yazhini joined #gluster
11:12 skoduri joined #gluster
11:15 plarsen joined #gluster
11:17 Saravana_ joined #gluster
11:18 hchiramm_ joined #gluster
11:19 kkeithley1 joined #gluster
11:29 atinm joined #gluster
11:37 B21956 joined #gluster
11:39 firemanxbr joined #gluster
11:39 shyam joined #gluster
11:42 jrm16020 joined #gluster
11:42 overclk joined #gluster
11:43 jcastill1 joined #gluster
11:48 ramky joined #gluster
11:48 jcastillo joined #gluster
11:53 overclk joined #gluster
12:03 rafi1 joined #gluster
12:06 rafi joined #gluster
12:07 unclemarc joined #gluster
12:09 rafi1 joined #gluster
12:11 jtux joined #gluster
12:13 nbalacha joined #gluster
12:14 dlambrig joined #gluster
12:16 Manikandan joined #gluster
12:16 poornimag joined #gluster
12:17 Xtreme joined #gluster
12:17 Xtreme Hello
12:17 glusterbot Xtreme: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
12:17 Xtreme I have Gluster running on two servers
12:17 Xtreme but everytime i reboot, my /storage-pool is empty
12:17 Xtreme my files go missng.
12:17 Xtreme any idea what can i do?
12:18 Xtreme now my files are missing and dont know where to look for them..
12:18 Xtreme Anyone?
12:19 Saravana_ joined #gluster
12:22 kkeithley1 joined #gluster
12:28 jcastill1 joined #gluster
12:28 overclk joined #gluster
12:30 hchiramm_ joined #gluster
12:34 jcastillo joined #gluster
12:34 yazhini joined #gluster
12:40 ramky joined #gluster
12:40 pdrakeweb joined #gluster
12:44 dblack joined #gluster
12:44 hgowtham joined #gluster
13:00 meghanam joined #gluster
13:07 chirino joined #gluster
13:10 corretico joined #gluster
13:13 dblack joined #gluster
13:13 haomaiwang joined #gluster
13:14 dblack joined #gluster
13:15 dblack joined #gluster
13:15 mpietersen joined #gluster
13:16 dblack joined #gluster
13:18 DV joined #gluster
13:19 overclk joined #gluster
13:27 _Bryan_ joined #gluster
13:29 rafi joined #gluster
13:31 julim joined #gluster
13:32 shyam joined #gluster
13:33 bennyturns joined #gluster
13:39 DV joined #gluster
13:40 dgandhi joined #gluster
13:40 DV__ joined #gluster
13:41 plarsen joined #gluster
13:42 squizzi joined #gluster
13:42 dusmant joined #gluster
13:44 social joined #gluster
13:45 harold joined #gluster
13:51 muneerse joined #gluster
13:52 dlambrig joined #gluster
14:02 haomaiwa_ joined #gluster
14:19 DV joined #gluster
14:19 overclk joined #gluster
14:23 aaronott joined #gluster
14:24 JoeJulian_ joined #gluster
14:24 harish joined #gluster
14:29 tertiary joined #gluster
14:30 tertiary I currently have a distributed replicated volume working. Is it possible to convert this to distributed replicated striped without rebuilding the entire volume?
14:30 mpietersen joined #gluster
14:38 timotheus1 joined #gluster
14:41 cholcombe joined #gluster
14:43 baojg joined #gluster
14:43 muneerse2 joined #gluster
14:49 overclk joined #gluster
14:59 jcastill1 joined #gluster
15:02 corretico joined #gluster
15:02 ramky joined #gluster
15:02 haomaiwa_ joined #gluster
15:02 shaunm joined #gluster
15:03 corretico_ joined #gluster
15:04 jcastillo joined #gluster
15:07 neofob joined #gluster
15:17 glusterdude joined #gluster
15:21 chirino joined #gluster
15:24 glusterdude Looking for some input on node removal times...we have a brick with about 3.5TB of data in it that we are removing. The remove-brick command has been running for 5 days now to move what I would consider a relatively small amount of data. Does that sound normal?
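(Aside: the progress of an ongoing remove-brick migration can be checked with the status sub-command; the volume and brick names below are hypothetical and need to match the remove-brick that is actually running:)

    gluster volume remove-brick myvol server1:/bricks/brick1 status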
15:27 wushudoin| joined #gluster
15:30 topshare joined #gluster
15:31 cyberswat joined #gluster
16:05 overclk joined #gluster
16:07 ipmango joined #gluster
16:09 rafi joined #gluster
16:12 techsenshi joined #gluster
16:12 techsenshi after performing a volume rebalance, do I still need to run a rebalance fix-layout?
16:13 techsenshi looking for documentation and it's a little unclear
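(Aside, as a hedged sketch of the two forms for a hypothetical volume myvol: fix-layout only rewrites directory layouts without moving data, while a full rebalance migrates data and fixes layouts as part of that, so a separate fix-layout run is normally not needed afterwards:)

    gluster volume rebalance myvol fix-layout start   # layout only, no data movement
    gluster volume rebalance myvol start              # full rebalance: fix layout + migrate data
    gluster volume rebalance myvol status             # watch progress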
16:16 jdossey joined #gluster
16:32 dlambrig_ joined #gluster
16:35 aravindavk joined #gluster
16:35 cornfed78 joined #gluster
16:36 cornfed78 Hi all. I'm wondering if I can get some upgrade advice.
16:36 Rapture joined #gluster
16:37 cornfed78 Going from 3.6.3 -> 3.7.3
16:37 cornfed78 I have a 3-node replica setup. Do I need to schedule downtime for all three nodes simultaneously? Or can I upgrade them one at a time?
16:41 overclk joined #gluster
16:46 calavera joined #gluster
16:48 plarsen joined #gluster
16:56 pdrakeweb joined #gluster
17:00 dijuremo @cornfed78: You should be able to live upgrade to 3.7.3, however you will need to change some settings in your volumes...
17:01 cornfed78 Do you have any links to the documentation? I have this:
17:01 cornfed78 http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.7
17:01 cornfed78 is there anything more specific that I need to do?
17:01 dijuremo gluster volume set <volname> server.allow-insecure on
17:02 cornfed78 ah, ok
17:02 cornfed78 i think i read about that in some of the IRC logs
17:02 cornfed78 thought that only applied to the 3.7.2 release of gluster
17:02 dijuremo Might also need: rpc-auth-allow-insecure on
17:02 dijuremo <= 3.7.2 The change took effect in 3.7.3
17:03 dijuremo So the way I have usually done rolling upgrades is as follows:
17:04 dijuremo 1. Stop gluster on the node to be upgraded
17:04 dijuremo 2. Update software (if there is a kernel upgrade, then disable gluster on boot, and reboot server)
17:04 dijuremo 3. If you rebooted, then manually start gluster and check that it is connecting with gluster peer status
17:04 dijuremo 4. If all good, then re-enable gluster on boot and you are good....
17:05 cornfed78 awesome- thanks!
17:05 dijuremo 5. Wait for volume to heal on upgraded node (check with gluster volume heal VOLNAME info)
17:05 dijuremo 6. Repeat for node 2, then node 3
17:05 cornfed78 thanks for that..
17:05 dijuremo Now, I do not have quotas or any other crazy stuff, for quotas, etc the documentation says you need to do other stuff...
17:06 cornfed78 I saw that.. I don't use quotas..
17:06 cornfed78 I do have sparse images, though.. But i'm moving them off onto single-brick volumes so that the heal doesn't inflate them
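(Aside: the rolling-upgrade steps above, written out as a hedged shell sketch for one node at a time. The volume name myvol is hypothetical, the package and service commands assume a systemd/yum-based distribution, and the allow-insecure settings are the ones discussed above for 3.7:)

    # once, on any node: allow clients connecting from unprivileged ports
    gluster volume set myvol server.allow-insecure on
    # if also needed: add "option rpc-auth-allow-insecure on" to
    # /etc/glusterfs/glusterd.vol and restart glusterd

    # then, on each node in turn:
    systemctl stop glusterd            # stop the management daemon on this node
    pkill glusterfsd                   # brick processes may also need to be stopped
    yum update glusterfs\*             # update packages (reboot here if a new kernel landed)
    systemctl start glusterd           # start gluster again
    gluster peer status                # check the node reconnects to its peers
    gluster volume heal myvol info     # wait for pending heals to drain before the next node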
17:07 cornfed78 Anyone aware of any issues with Gluster 3.7.3 & oVirt 3.5.3?
17:07 kovshenin joined #gluster
17:08 dijuremo @cornfed78 if you upgrade, do let me know about gluster 3.7.3 and ovirt 3.5.3, I am running a replica pair with self-hosted engine and a third brick-less peer for quorum.
17:08 cornfed78 will do
17:09 cornfed78 will need to plan this out.. hoping to do it so that it doesn't involve downtime for any VMs in oVirt..
17:09 dijuremo @cornfed78 you running self-hosted engine too?
17:09 cornfed78 no... I think that makes things a bit easier, thankfully
17:09 dijuremo @cornfed78 hell of a lot easier....
17:10 cornfed78 thanks for the help! I'll try and find you when I'm done..
17:10 dijuremo @cornfed78 I am running self-hosted engine and AD controllers on ovirt which provide DNS and user account space for the hosting servers, so when gluster goes down, everything is just dead...
17:10 cornfed78 gotta run out for a bit.. really appreciate the input
17:11 cornfed78 yeah.. I have our IDM system on there.. same issue.. :(
17:11 cornfed78 the sparse image thing is enough to make me want to try upgrading though
17:12 cholcombe joined #gluster
17:15 jrm16020 joined #gluster
17:16 cholcombe joined #gluster
17:16 Rapture joined #gluster
17:21 skoduri joined #gluster
17:22 jrm16020 joined #gluster
17:24 overclk joined #gluster
17:28 jwd joined #gluster
17:29 catern does anyone know of a networked filesystem where I can manually specify files to be downloaded/cached locally, and the filesystem will stay up with just those files if the network goes down?
17:29 catern maybe gluster can do this? :)
17:29 catern (i ask here since I'm sure you guys are at the cutting edge of all this stuff)
17:30 jwaibel joined #gluster
17:32 jrm16020 joined #gluster
17:35 jrm16020 joined #gluster
17:43 chirino joined #gluster
17:47 jdossey catern: afaik gluster doesn't work like that, but the Coda filesystem may be useful for you
17:47 jdossey https://en.wikipedia.org/wiki/Coda_%28file_system%29
17:47 glusterbot Title: Coda (file system) - Wikipedia, the free encyclopedia (at en.wikipedia.org)
17:55 trav408 joined #gluster
17:58 oytun joined #gluster
17:58 oytun Hello everybody!
17:58 oytun I am encountering a weird problem with our glusterfs structure.
18:00 oytun When the application tries to rename a directory, glusterfs fails:
18:00 oytun [2015-08-17 16:58:54.690589] W [fuse-bridge.c:1725:fuse_rename_cbk] 0-glusterfs-fuse: 46824: /packages/_3421 -> /packages/3421 => -1 (Directory not empty)
18:00 oytun So, apparently the directory is not empty, and that is RIGHT.
18:01 oytun however, the server's own file system (ext3), for example, doesn't complain in these cases.
18:01 oytun it simply overwrites the target directory.
18:01 oytun However, it can't overwrite the target on the glusterFS mount.
18:02 oytun any ideas how to overcome this?
18:02 oytun I know renaming should not allow target overwriting, normally. However, how does the system's own file system allow it then?! I am confused.
18:04 oytun Here is the log of this operation:
18:04 oytun https://paste.fedoraproject.org/255950/43983466/
18:04 glusterbot Title: #255950 Fedora Project Pastebin (at paste.fedoraproject.org)
18:06 oytun I searched for a way to enable something like auto-overwrite for rename operations in glusterfs, XFS, or fuse... nope :(
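(Aside on the semantics: POSIX rename(2) will only replace an existing target directory when that directory is empty; otherwise it fails with ENOTEMPTY on local filesystems as well. A minimal local reproduction with GNU mv, using throwaway paths:)

    mkdir -p /tmp/demo/src /tmp/demo/dst
    touch /tmp/demo/dst/file            # target directory is NOT empty
    mv -T /tmp/demo/src /tmp/demo/dst   # fails: Directory not empty
    rm /tmp/demo/dst/file
    mv -T /tmp/demo/src /tmp/demo/dst   # succeeds once the target is empty

(Without -T, mv moves the source into the existing target directory instead of over it, which may be why the overwrite appeared to succeed on the brick's ext3 filesystem.)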
18:14 pdrakeweb joined #gluster
18:16 calavera joined #gluster
18:21 Gill joined #gluster
18:23 oytun I also checked channel logs, no luck
18:24 catern jdossey: gosh dang it :)
18:25 catern jdossey: I know about Coda, I'm from CMU, but Coda is research software, not really usable :)
18:40 aaronott joined #gluster
18:43 oytun anybody has an idea? I keep digging it.
18:48 zarcos joined #gluster
18:49 jwd joined #gluster
18:49 zarcos Greetings! Is there anything special about upgrading Gluster nodes from 3.5.2 to 3.5.5? I'm not seeing any instructions in the Release notes for those releases, so I assume a rolling update (replication 3) should be adequate?
18:53 zarcos I was going to upgrade to 3.6, but I'd rather put that off since it requires downtime. My initial trial of that didn't go well...
18:53 oytun I upgraded our nodes from 3.5 to 3.7, nothing bad happened
18:54 oytun we don't have anything very extraordinary in our configuration, though. just two nodes so far.
18:54 oytun --- my rename overwriting issue is apparently not related to XFS, because on the actual node server, I can do "mv" and it will overwrite the target.
18:54 glusterbot oytun: -'s karma is now -345
19:02 catern hahahaha
19:03 unclemarc joined #gluster
19:05 pdrakeweb joined #gluster
19:07 techsenshi after performing a volume rebalance, do I still need to run a rebalance fix-layout?
19:16 l0uis catern: use vmtouch to keep the data in the os file system cache maybe?
19:17 pdrakewe_ joined #gluster
19:17 * l0uis doesn't trust 3.7 yet
19:22 glusterdude joined #gluster
19:23 catern l0uis: will that keep working when the network goes down for gluster?
19:26 l0uis beats me, try it out :)
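(Aside: a hedged sketch of the vmtouch suggestion, assuming vmtouch is installed and /mnt/gluster/data is the data to pin; whether a FUSE mount can keep serving those pages through a network outage is exactly the open question above:)

    vmtouch -vt /mnt/gluster/data   # touch: read the files into the page cache
    vmtouch -dl /mnt/gluster/data   # daemonize and mlock the pages so they stay resident
                                    # (mlock may need root or a raised memlock limit)
    vmtouch -v /mnt/gluster/data    # report how much of the data is currently cached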
19:35 unclemarc joined #gluster
19:39 gletessier__ joined #gluster
19:45 deniszh joined #gluster
20:07 pdrakeweb joined #gluster
20:08 jbrooks joined #gluster
20:08 tanuck joined #gluster
20:28 DV joined #gluster
20:48 badone_ joined #gluster
20:52 klaxa joined #gluster
21:10 [o__o] joined #gluster
21:23 shyam joined #gluster
21:49 jdossey joined #gluster
21:59 victori joined #gluster
22:02 7GHAAZC4Y joined #gluster
22:05 klaas_ joined #gluster
22:43 Rapture joined #gluster
23:11 gildub joined #gluster
23:20 calavera joined #gluster
23:36 jdossey joined #gluster
23:58 cyberswat joined #gluster
