IRC log for #gluster, 2014-11-27


All times shown according to UTC.

Time Nick Message
00:20 msmith_ joined #gluster
00:22 Rydekull joined #gluster
00:48 cleo_ joined #gluster
01:08 gildub joined #gluster
01:08 topshare joined #gluster
01:10 msmith_ joined #gluster
01:20 meghanam_ joined #gluster
01:20 meghanam joined #gluster
01:24 cleo_ i tried the gluster command 'gluster volume log locate [volname]'
01:24 cleo_ then the cmd comes back with 'Usage: volume log <VOLNAME> rotate [BRICK]'
01:24 cleo_ what's wrong with it?
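For context on the error above: the 'log locate' subcommand was dropped from the gluster CLI in later 3.x releases, so only the rotate form shown in the usage string is accepted. A minimal sketch, with the volume name 'myvol' as a placeholder:

    gluster volume log myvol rotate
    ls /var/log/glusterfs/bricks/    # default location of the brick log files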
01:27 Telsin joined #gluster
01:36 cyberbootje joined #gluster
01:38 edwardm61 joined #gluster
01:48 meghanam joined #gluster
01:48 meghanam_ joined #gluster
01:50 bit4man joined #gluster
01:55 harish_ joined #gluster
02:02 haomaiwa_ joined #gluster
02:45 meghanam joined #gluster
02:45 meghanam_ joined #gluster
03:02 hagarth joined #gluster
03:08 marcoceppi joined #gluster
03:18 meghanam joined #gluster
03:18 meghanam_ joined #gluster
03:43 soumya_ joined #gluster
03:49 RameshN joined #gluster
03:56 marcoceppi joined #gluster
03:58 shubhendu joined #gluster
04:00 itisravi joined #gluster
04:05 marcus_ joined #gluster
04:05 marcus_ joined #gluster
04:06 kke_ joined #gluster
04:06 prasanth|afk joined #gluster
04:07 natgeorg joined #gluster
04:08 coredump|br joined #gluster
04:09 and`_ joined #gluster
04:10 ron-slc joined #gluster
04:10 marcoceppi joined #gluster
04:10 marcoceppi joined #gluster
04:10 samppah joined #gluster
04:11 scuttle` joined #gluster
04:11 mikedep333 joined #gluster
04:13 anoopcs joined #gluster
04:14 msmith_ joined #gluster
04:17 lalatenduM joined #gluster
04:21 wushudoin joined #gluster
04:23 marcoceppi joined #gluster
04:26 deepakcs joined #gluster
04:31 ArminderS joined #gluster
04:32 jiffin joined #gluster
04:32 kanagaraj joined #gluster
04:36 ArminderS joined #gluster
04:38 ndarshan joined #gluster
04:38 nbalachandran joined #gluster
04:40 ArminderS- joined #gluster
04:41 pp joined #gluster
04:42 kdhananjay joined #gluster
04:42 ArminderS joined #gluster
04:43 marcoceppi joined #gluster
04:45 coredump joined #gluster
04:49 marcoceppi joined #gluster
04:49 rafi joined #gluster
04:49 kdhananjay left #gluster
04:51 lalatenduM joined #gluster
04:57 ppai joined #gluster
04:59 kdhananjay joined #gluster
04:59 hagarth joined #gluster
05:00 atinmu joined #gluster
05:01 jiffin1 joined #gluster
05:06 kshlm joined #gluster
05:12 meghanam_ joined #gluster
05:12 meghanam joined #gluster
05:19 anil joined #gluster
05:20 side_control joined #gluster
05:24 spandit joined #gluster
05:26 smohan joined #gluster
05:38 dusmant joined #gluster
05:39 bala joined #gluster
05:40 sahina joined #gluster
05:53 overclk joined #gluster
05:54 kumar joined #gluster
05:55 marcoceppi joined #gluster
05:57 side_control joined #gluster
06:06 marcoceppi joined #gluster
06:06 marcoceppi joined #gluster
06:09 ricky-ticky joined #gluster
06:20 dusmant joined #gluster
06:28 atalur joined #gluster
06:35 kke_ is it possible to have two versions of glusterfs-client installed at the same time?
06:48 ppai joined #gluster
06:48 ArminderS- joined #gluster
06:50 ArminderS joined #gluster
06:56 ArminderS- joined #gluster
06:56 ctria joined #gluster
07:01 ArminderS joined #gluster
07:02 kke_ hmm looks like apt-get install -y glusterfs-client=3.4.6-1 glusterfs-client=3.2.7-3+deb7u1 passed without complaints
07:12 kke_ right but it only installs one of them
07:17 sputnik13 joined #gluster
07:17 kke_ chroot could be an option. or extracting the binary from the deb if that works
07:21 kke_ it's a bit annoying that new clients won't connect to old servers
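apt keeps only one version of a given package name installed, which is why the second version in the apt-get line above silently won. A sketch of the deb-extraction workaround kke_ mentions, assuming wheezy-era package filenames and an illustrative target directory; the extracted binary would also need matching shared libraries (libglusterfs0 and friends) unpacked the same way:

    apt-get download glusterfs-client=3.2.7-3+deb7u1
    dpkg-deb -x glusterfs-client_3.2.7-3+deb7u1_amd64.deb /opt/glusterfs-3.2
    /opt/glusterfs-3.2/usr/sbin/glusterfs --version    # invoke the old client directly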
07:21 nshaikh joined #gluster
07:21 corretico joined #gluster
07:22 sputnik13 joined #gluster
07:24 ArminderS- joined #gluster
07:31 ekuric joined #gluster
07:35 gildub joined #gluster
07:36 ArminderS joined #gluster
07:36 kovshenin joined #gluster
07:39 raghu` joined #gluster
07:48 Philambdo joined #gluster
07:51 m0ellemeister joined #gluster
07:53 sahina joined #gluster
07:54 ppai joined #gluster
07:54 LebedevRI joined #gluster
08:06 atalur joined #gluster
08:09 sahina joined #gluster
08:21 dusmant joined #gluster
08:22 genghi joined #gluster
08:23 genghi Hi.. any tips on tuning for small files in the range of 50K?
08:26 ricky-ticky joined #gluster
08:28 meghanam joined #gluster
08:28 meghanam_ joined #gluster
08:29 saurabh joined #gluster
08:33 liquidat joined #gluster
08:40 fsimonce joined #gluster
08:47 rolfb joined #gluster
08:48 nbalachandran joined #gluster
08:50 atinmu joined #gluster
08:50 gildub joined #gluster
08:55 hagarth joined #gluster
08:56 dusmant joined #gluster
09:00 ppai joined #gluster
09:09 sage_ joined #gluster
09:11 Bardack joined #gluster
09:12 ghenry joined #gluster
09:12 ghenry joined #gluster
09:15 harish_ joined #gluster
09:26 deniszh joined #gluster
09:32 Telsin joined #gluster
09:32 gildub joined #gluster
09:33 badone joined #gluster
09:37 atalur joined #gluster
09:37 rafi1 joined #gluster
09:38 dusmant joined #gluster
09:48 Pupeno joined #gluster
09:56 SOLDIERz joined #gluster
09:57 Telsin joined #gluster
09:57 SOLDIERz_ joined #gluster
09:58 glusterbot News from newglusterbugs: [Bug 1153610] libgfapi crashes in glfs_fini for RDMA type volumes <https://bugzilla.redhat.com/show_bug.cgi?id=1153610>
09:59 dusmant joined #gluster
10:06 nbalachandran joined #gluster
10:07 atinmu joined #gluster
10:14 hagarth joined #gluster
10:23 Telsin joined #gluster
10:29 mator genghi, google it, but see http://rhsummit.files.wordpress.com/2014/04/bengland_h_1100_rhs_performance.pdf and previous years' sessions
10:42 RameshN joined #gluster
10:42 gildub joined #gluster
10:44 calisto joined #gluster
10:44 genghi mator: thanks
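The slides linked above go deeper; as a reference point, small-file tuning in gluster 3.x mostly happens through volume options such as these (the option names exist in 3.x, but the values are illustrative and workload-dependent, and 'myvol' is a placeholder):

    gluster volume set myvol performance.cache-size 256MB
    gluster volume set myvol performance.io-thread-count 32
    gluster volume set myvol performance.readdir-ahead on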
10:47 elico joined #gluster
10:49 ricky-ticky joined #gluster
10:50 ppai joined #gluster
10:50 Telsin joined #gluster
10:56 Fen2 joined #gluster
10:56 Fen2 Hi! :) Can GlusterFS 3.6 be installed and run on CentOS 7?
10:59 glusterbot News from newglusterbugs: [Bug 1151384] Rebalance fails to complete - stale file handles <https://bugzilla.redhat.com/show_bug.cgi?id=1151384>
10:59 glusterbot News from newglusterbugs: [Bug 1168574] Partition disappearing <https://bugzilla.redhat.com/show_bug.cgi?id=1168574>
11:01 hagarth Fen2: yes
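At the time, 3.6 packages for CentOS 7 came from the repo file published on download.gluster.org; a minimal install sketch (the URL is as published then and may have moved since):

    wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/CentOS/glusterfs-epel.repo
    yum install -y glusterfs-server
    systemctl enable glusterd
    systemctl start glusterd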
11:10 ricky-ticky joined #gluster
11:15 meghanam joined #gluster
11:15 meghanam_ joined #gluster
11:18 Telsin joined #gluster
11:21 meghanam joined #gluster
11:35 vimal joined #gluster
11:37 delhage joined #gluster
11:38 sage_ joined #gluster
11:44 haomaiwa_ joined #gluster
11:45 harish joined #gluster
11:50 sage_ joined #gluster
12:00 Telsin joined #gluster
12:04 diegows joined #gluster
12:10 ninkotech joined #gluster
12:10 ninkotech_ joined #gluster
12:15 Telsin joined #gluster
12:22 edward1 joined #gluster
12:23 itisravi_ joined #gluster
12:25 ppai joined #gluster
12:34 deniszh1 joined #gluster
12:36 deniszh joined #gluster
12:40 calisto joined #gluster
12:42 calum_ joined #gluster
12:58 Fen1 joined #gluster
13:02 Slashman joined #gluster
13:03 anoopcs joined #gluster
13:06 ninkotech joined #gluster
13:06 ninkotech_ joined #gluster
13:08 liquidat joined #gluster
13:17 vimal joined #gluster
13:18 smohan joined #gluster
13:22 vimal joined #gluster
13:46 liquidat joined #gluster
13:54 _Bryan_ joined #gluster
13:55 kshlm joined #gluster
13:58 meghanam joined #gluster
14:08 virusuy joined #gluster
14:08 virusuy joined #gluster
14:08 morse joined #gluster
14:09 virusuy joined #gluster
14:10 virusuy joined #gluster
14:10 virusuy joined #gluster
14:12 virusuy joined #gluster
14:12 virusuy joined #gluster
14:22 bala joined #gluster
14:27 pkoro joined #gluster
14:29 glusterbot News from newglusterbugs: [Bug 914874] Enhancement suggestions for BitRot hash computation <https://bugzilla.redhat.com/show_bug.cgi?id=914874>
14:30 Telsin joined #gluster
14:30 sputnik13 joined #gluster
14:50 georgeh-LT2 joined #gluster
14:58 mdavidson joined #gluster
14:58 shubhendu joined #gluster
15:00 glusterbot News from newglusterbugs: [Bug 831699] Handle multiple networks better <https://bugzilla.redhat.com/show_bug.cgi?id=831699>
15:00 glusterbot News from newglusterbugs: [Bug 885424] File operations occur as root regardless of original user on 32-bit nfs client <https://bugzilla.redhat.com/show_bug.cgi?id=885424>
15:00 glusterbot News from newglusterbugs: [Bug 905747] [FEAT] Tier support for Volumes <https://bugzilla.redhat.com/show_bug.cgi?id=905747>
15:00 glusterbot News from newglusterbugs: [Bug 907540] Gluster fails to start many volumes <https://bugzilla.redhat.com/show_bug.cgi?id=907540>
15:00 glusterbot News from newglusterbugs: [Bug 911361] Bricks grow when other bricks heal <https://bugzilla.redhat.com/show_bug.cgi?id=911361>
15:00 glusterbot News from newglusterbugs: [Bug 914804] [FEAT] Implement volume-specific quorum <https://bugzilla.redhat.com/show_bug.cgi?id=914804>
15:00 glusterbot News from newglusterbugs: [Bug 915996] [FEAT] Cascading Geo-Replication Weighted Routes <https://bugzilla.redhat.com/show_bug.cgi?id=915996>
15:00 glusterbot News from newglusterbugs: [Bug 922542] [FEAT] Please add support to replace multiple bricks at a time. <https://bugzilla.redhat.com/show_bug.cgi?id=922542>
15:00 glusterbot News from newglusterbugs: [Bug 949096] [FEAT] : Inconsistent read on volume configured with cluster.quorum-type auto <https://bugzilla.redhat.com/show_bug.cgi?id=949096>
15:00 glusterbot News from newglusterbugs: [Bug 956247] Quota enforcement unreliable <https://bugzilla.redhat.com/show_bug.cgi?id=956247>
15:00 glusterbot News from newglusterbugs: [Bug 960141] NFS no longer responds, get  "Reply submission failed" errors <https://bugzilla.redhat.com/show_bug.cgi?id=960141>
15:00 glusterbot News from newglusterbugs: [Bug 960867] failover doesn't work when a hdd part of hardware raid massive becomes broken <https://bugzilla.redhat.com/show_bug.cgi?id=960867>
15:00 glusterbot News from newglusterbugs: [Bug 961197] glusterd fails to read from the nfs socket every 3 seconds if all volumes are set nfs.disable <https://bugzilla.redhat.com/show_bug.cgi?id=961197>
15:00 glusterbot News from newglusterbugs: [Bug 961506] getfattr can hang when trying to get an attribute that doesn't exist <https://bugzilla.redhat.com/show_bug.cgi?id=961506>
15:00 glusterbot News from newglusterbugs: [Bug 971528] Gluster fuse mount corrupted <https://bugzilla.redhat.com/show_bug.cgi?id=971528>
15:00 glusterbot News from newglusterbugs: [Bug 974886] timestamps of brick1 and brick2 is not the same. <https://bugzilla.redhat.com/show_bug.cgi?id=974886>
15:00 glusterbot News from newglusterbugs: [Bug 978297] Glusterfs self-heal daemon crash on split-brain replicate log too big <https://bugzilla.redhat.com/show_bug.cgi?id=978297>
15:00 glusterbot News from newglusterbugs: [Bug 981456] RFE: Please create an "initial offline bulk load" tool for data, for GlusterFS <https://bugzilla.redhat.com/show_bug.cgi?id=981456>
15:00 glusterbot News from newglusterbugs: [Bug 983676] 2.6.39-400.109.1.el6uek.x86_64 doesn't work with GlusterFS 3.3.1 <https://bugzilla.redhat.com/show_bug.cgi?id=983676>
15:00 glusterbot News from newglusterbugs: [Bug 984602] [FEAT] Add explicit brick affinity <https://bugzilla.redhat.com/show_bug.cgi?id=984602>
15:00 glusterbot News from newglusterbugs: [Bug 990220] Group permission with high GID Number (200090480) is not being honored by Gluster <https://bugzilla.redhat.com/show_bug.cgi?id=990220>
15:00 glusterbot News from newglusterbugs: [Bug 993433] Volume quota report is human readable only, not machine readable <https://bugzilla.redhat.com/show_bug.cgi?id=993433>
15:00 glusterbot News from newglusterbugs: [Bug 997206] [RFE] geo-replication to swift target <https://bugzilla.redhat.com/show_bug.cgi?id=997206>
15:00 glusterbot News from newglusterbugs: [Bug 997889] VM filesystem read-only <https://bugzilla.redhat.com/show_bug.cgi?id=997889>
15:00 glusterbot News from newglusterbugs: [Bug 1005616] glusterfs client crash (signal received: 6) <https://bugzilla.redhat.com/show_bug.cgi?id=1005616>
15:00 glusterbot News from newglusterbugs: [Bug 1005860] GlusterFS: Can't add a third brick to a volume - "Number of Bricks" is messed up <https://bugzilla.redhat.com/show_bug.cgi?id=1005860>
15:00 glusterbot News from newglusterbugs: [Bug 1005862] GlusterFS: Can't add a new peer to the cluster - "Number of Bricks" is messed up <https://bugzilla.redhat.com/show_bug.cgi?id=1005862>
15:00 glusterbot News from newglusterbugs: [Bug 1007346] gluster 3.4 write <https://bugzilla.redhat.com/show_bug.cgi?id=1007346>
15:00 glusterbot News from newglusterbugs: [Bug 1016482] Owner of some directories become root <https://bugzilla.redhat.com/show_bug.cgi?id=1016482>
15:00 glusterbot News from newglusterbugs: [Bug 1021998] nfs mount via symbolic link does not work <https://bugzilla.redhat.com/show_bug.cgi?id=1021998>
15:00 glusterbot News from newglusterbugs: [Bug 1023309] geo-replication command failed <https://bugzilla.redhat.com/show_bug.cgi?id=1023309>
15:00 glusterbot News from newglusterbugs: [Bug 1023636] Inconsistent UUID's not causing an error that would stop the system <https://bugzilla.redhat.com/show_bug.cgi?id=1023636>
15:00 glusterbot News from newglusterbugs: [Bug 1024181] Unicode filenames cause directory listing interactions to hang/loop <https://bugzilla.redhat.com/show_bug.cgi?id=1024181>
15:00 glusterbot News from newglusterbugs: [Bug 1029239] RFE: Rebalance information should include volume name and brick specific information <https://bugzilla.redhat.com/show_bug.cgi?id=1029239>
15:00 glusterbot News from newglusterbugs: [Bug 1031817] Setting a quota for the root of a volume changes the reported volume size <https://bugzilla.redhat.com/show_bug.cgi?id=1031817>
15:00 glusterbot News from newglusterbugs: [Bug 1040862] volume status detail command cause fd leak <https://bugzilla.redhat.com/show_bug.cgi?id=1040862>
15:00 glusterbot News from newglusterbugs: [Bug 1045426] geo-replication failed with: (xtime) failed on peer with OSError, when use non-privileged user <https://bugzilla.redhat.com/show_bug.cgi?id=1045426>
15:00 glusterbot News from newglusterbugs: [Bug 1070685] glusterfs ipv6 functionality not working <https://bugzilla.redhat.com/show_bug.cgi?id=1070685>
15:00 glusterbot News from newglusterbugs: [Bug 1086493] [RFE] - Add a default snapshot name when creating a snap <https://bugzilla.redhat.com/show_bug.cgi?id=1086493>
15:00 glusterbot News from newglusterbugs: [Bug 1086497] [RFE] - Upon snaprestore, immediately take a snapshot to provide recovery point <https://bugzilla.redhat.com/show_bug.cgi?id=1086497>
15:00 glusterbot News from newglusterbugs: [Bug 1087947] Feature request: configurable error reporting hook script <https://bugzilla.redhat.com/show_bug.cgi?id=1087947>
15:00 glusterbot News from newglusterbugs: [Bug 1095179] Gluster volume inaccessible on all bricks after a glusterfsd segfault on one brick <https://bugzilla.redhat.com/show_bug.cgi?id=1095179>
15:00 glusterbot News from newglusterbugs: [Bug 1109950] [feature] "gluster volume status" could report version <https://bugzilla.redhat.com/show_bug.cgi?id=1109950>
15:00 glusterbot News from newglusterbugs: [Bug 1116168] RFE: Allow geo-replication to slave Volume in same trusted storage pool <https://bugzilla.redhat.com/show_bug.cgi?id=1116168>
15:00 glusterbot News from newglusterbugs: [Bug 1147107] Cannot set distribute.migrate-data xattr on a file <https://bugzilla.redhat.com/show_bug.cgi?id=1147107>
15:00 glusterbot News from newglusterbugs: [Bug 950024] replace-brick immediately saturates IO on source brick causing the entire volume to be unavailable, then dies <https://bugzilla.redhat.com/show_bug.cgi?id=950024>
15:00 glusterbot News from newglusterbugs: [Bug 812342] [FEAT] inotify support <https://bugzilla.redhat.com/show_bug.cgi?id=812342>
15:00 glusterbot News from newglusterbugs: [Bug 847821] After disabling NFS the message "0-transport: disconnecting now" keeps appearing in the logs <https://bugzilla.redhat.com/show_bug.cgi?id=847821>
15:00 glusterbot News from newglusterbugs: [Bug 861947] Large writes in KVM host slow on fuse, but full speed on nfs <https://bugzilla.redhat.com/show_bug.cgi?id=861947>
15:00 glusterbot News from newglusterbugs: [Bug 903873] Ports show as N/A in status <https://bugzilla.redhat.com/show_bug.cgi?id=903873>
15:00 glusterbot News from newglusterbugs: [Bug 922801] Gluster not resolving hosts with IPv6 only lookups <https://bugzilla.redhat.com/show_bug.cgi?id=922801>
15:00 glusterbot News from newglusterbugs: [Bug 928781] hangs when mount a volume at own brick <https://bugzilla.redhat.com/show_bug.cgi?id=928781>
15:00 glusterbot News from newglusterbugs: [Bug 951177] glusterd silently fails if a peer file is empty <https://bugzilla.redhat.com/show_bug.cgi?id=951177>
15:00 glusterbot News from newglusterbugs: [Bug 963335] glusterd enters D state after replace-brick abort operation <https://bugzilla.redhat.com/show_bug.cgi?id=963335>
15:00 glusterbot News from newglusterbugs: [Bug 1044352] [RFE] Exempting a list of client IPs or the RHS servers themselves from anonymous uid and gid feature and/or from root squashing <https://bugzilla.redhat.com/show_bug.cgi?id=1044352>
15:00 glusterbot News from newglusterbugs: [Bug 1045992] [RFE] CTDB - GlusterFS NFS Monitor Script <https://bugzilla.redhat.com/show_bug.cgi?id=1045992>
15:00 glusterbot News from newglusterbugs: [Bug 764063] Debian package does not depend on fuse <https://bugzilla.redhat.com/show_bug.cgi?id=764063>
15:01 glusterbot News from newglusterbugs: [Bug 1093217] [RFE] Gluster module (purpleidea) to support HA installations using Pacemaker <https://bugzilla.redhat.com/show_bug.cgi?id=1093217>
15:08 B21956 joined #gluster
15:14 virusuy joined #gluster
15:14 virusuy joined #gluster
15:15 Telsin joined #gluster
15:21 mator http://www.reddit.com/r/linux/comments/2ndf5l/week_of_december_1st_kernel_developer_greg/
15:22 mator we still use the 3.2.x version of glusterfs here ... =/
15:25 vimal joined #gluster
15:26 shubhendu joined #gluster
15:34 ikke- joined #gluster
15:37 ikke- Hello
15:37 glusterbot ikke-: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
15:37 Telsin joined #gluster
15:37 ikke- Anyone know a good method of changing IP addresses on two gluster hosts? (2T data)
15:39 _dist joined #gluster
15:45 RameshN joined #gluster
15:48 atrius joined #gluster
16:04 mator ikke- not that i'm able to help, but how did you create your volume? I mean did you use IP addresses or hostnames on peer/volume creation?
16:04 mator https://bugzilla.redhat.com/show_bug.cgi?id=1049470
16:04 glusterbot Bug 1049470: unspecified, unspecified, ---, kaushal, CLOSED DUPLICATE, Gluster could do with a useful cli utility for updating host definitions
16:31 Slashman_ joined #gluster
16:40 lalatenduM joined #gluster
16:41 calisto joined #gluster
16:48 hagarth joined #gluster
16:59 bennyturns joined #gluster
17:05 cfeller can anyone answer why I see this:
17:06 cfeller [2014-11-27 16:57:24.858464] I [client-handshake.c:1474:client_setvolume_cbk] 0-gv0-client-0: Server and Client lk-version numbers are not same, reopening the fds
17:06 glusterbot cfeller: This is normal behavior and can safely be ignored.
17:06 cfeller well, thank you glusterbot... I was just a bit confused because everything is the same version.
17:10 RameshN joined #gluster
17:22 anoopcs joined #gluster
17:52 diegows joined #gluster
17:52 ArminderS joined #gluster
17:58 andreask joined #gluster
18:10 RameshN joined #gluster
18:19 _dist joined #gluster
18:20 elico joined #gluster
18:31 baoboa joined #gluster
18:44 virusuy joined #gluster
18:44 virusuy joined #gluster
18:46 LebedevRI joined #gluster
18:56 rotbeard joined #gluster
18:59 lalatenduM joined #gluster
19:18 andreask joined #gluster
19:34 ricky-ticky joined #gluster
19:37 Telsin joined #gluster
19:54 rshott joined #gluster
19:55 Telsin joined #gluster
20:07 ikke- mator: The host connects via IP
20:07 rshott left #gluster
20:08 ikke- mator: Not a hostname in /etc/hosts
20:10 ikke- One way that works, it's not very pretty, but... I created loopback interfaces with the old IP on the servers and they connect and work nicely. But I don't want to have two extra ifaces just because I can't change the IP in gluster...
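The loopback workaround described above amounts to giving each node its old address as an extra host-scope alias, along these lines (the address is a placeholder):

    ip addr add 192.0.2.10/32 dev lo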
20:12 ikke- Another way I tried was completely destroying the gluster volume and rebuilding it. This worked as well, at small scale with no data of value.
20:12 partner hmm, not a definite answer, but to my understanding you will need to bring down the volume and manually tune the files to point to the new address
20:13 partner and while doing it, switch to using hostnames
20:13 ikke- I'm moving the servers off-location so downtime is not an issue. (3-4h)
20:14 ikke- Is it preferred to use a hostname specified in the hosts file?
20:14 partner dns is the glue of the internet, don't you have one?
20:14 partner i would never ever touch the hosts file personally as i think it's one source of evil in the first place
20:15 partner but, if it provides the "dns", then that's better than an ip address
20:15 ikke- Yes I do, but I don't want to be dependent on the dns for my cluster to work
20:15 ikke- :)
20:16 partner imo if your dns is not working you have bigger issues than some storage..
20:16 partner anyways, as you already know the ip address isn't the way to go, so at least then choose the hosts file
20:16 partner are you physically moving those servers or moving the bricks?
20:17 ikke- Physically
20:18 ikke- Is it okay to post links in chat?
20:19 partner sure, those will go through, unlike multiline pastes
20:19 ikke- I tried this out, with great success, but I don't know how it works at a larger scale.
20:19 ikke- http://www.ovirt.org/Change_network_interface_for_Gluster
20:20 ikke- Do you have an opinion about what he is suggesting and how it would affect a "larger" cluster?
20:21 partner nice thing here is that if you "delete" anything it actually doesn't go away
20:22 partner "anything" being, for example, a volume; none of the files on the volume are touched
20:23 ikke- That's my thought as well. And if you have synced data on both sides, it shouldn't take too long for gluster to understand that they have the same data, right?
20:23 partner at least partially the stuff looks ok, it's the latter part that looks like removing everything.. (quickly viewing)
20:23 partner the part where it says to rm -rf Path_to_brick/* - don't do that, read the note below
20:24 partner though hmm
20:24 elico joined #gluster
20:25 partner maybe you'd better wait a bit before acting, the experts are waking up, the instructions look a bit weird to me
20:26 partner "If you need to maintain your volume, skip the deletion part and try to adapt the next part to your needs."
20:26 ikke- Yeah, that's what I thought at first as well, but when I tested it in a lab environment it worked
20:27 ikke- Thanks a lot for the talk, really appreciate it. :)
20:29 partner i would need to test this to be confident enough to give exact instructions
20:29 partner stick around for an hour or two and there should be pros around.. :o
20:31 _dist joined #gluster
20:31 partner as for dns, i'm not worried about it. if it breaks then many other things will break. i definitely don't want to maintain loads of hosts files, even if and when i have configuration management in place, that's just umm wrong
20:35 ikke- I'll look into that; at the moment there is a domain change in play as well that complicates things, so the "easy way out" is to get it up and rolling as smoothly and as "pretty" as possible.
20:48 Maitre Does anyone know anything about configuring the NFS server built into gluster?
20:49 Maitre AFAIK you cannot just export a clustered mountpoint via regular NFS.
20:49 Maitre But the built-in server seems to have no configurability at all.
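The built-in NFS server is configured through volume options rather than /etc/exports; a few of the nfs.* knobs available in 3.x, with illustrative values and a placeholder volume name:

    gluster volume set myvol nfs.disable off
    gluster volume set myvol nfs.rpc-auth-allow 192.0.2.*
    gluster volume set myvol nfs.export-dirs on
    gluster volume set myvol nfs.addr-namelookup off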
20:58 ikke- partner: When you said "tune manually", what did you mean by that? I'm running on CentOS 7
21:05 deniszh joined #gluster
21:06 georgeh-LT2 joined #gluster
21:06 badone joined #gluster
21:07 deniszh joined #gluster
21:35 gildub joined #gluster
21:36 partysoda joined #gluster
21:40 fyxim_ ikke-: I've successfully done it with search and replace in /var/lib/glusterd
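A sketch of that search-and-replace approach, assuming glusterd is stopped on every node first; the addresses are placeholders, and note that the brick info files under /var/lib/glusterd/vols/*/bricks/ are themselves named after the address, so they may need renaming as well:

    systemctl stop glusterd
    grep -rl '192.0.2.10' /var/lib/glusterd | xargs sed -i 's/192\.0\.2\.10/198.51.100.10/g'
    # rename any files under vols/*/bricks/ whose names contain the old address, then:
    systemctl start glusterd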
21:42 NigeyS joined #gluster
21:49 ikke- fyxim_: Thanks, I'll look into that.
22:01 ikke- Just out of curiosity, is there a plan to "fix" this issue? Or is the best practice to use hostnames? And if so, you'd still run into the same problems..
22:07 MugginsM joined #gluster
22:38 calisto joined #gluster
23:01 n-st joined #gluster
23:15 Pupeno_ joined #gluster
23:15 gildub joined #gluster
23:58 chirino joined #gluster
