"MongoError: no mongos proxy available" when connecting to a replica set



I am following this tutorial (https://github.com/drginm/docker-boilerplates/tree/master/mongodb-replicaset) in order to get a three-instance MongoDB replica set working with docker-compose.

Here are the steps I have taken so far:

1) I copied the setup and mongo-rs0-1 folders into my root directory.

2) I added the three mongo instances and the setup instance to my docker-compose file. It now looks like this:

version: '3'
services:
  mongo-rs0-1:
    image: "mongo-start"
    build: ./mongo-rs0-1
    ports:
      - "27017:27017"
    volumes:
      - ./mongo-rs0-1/data:/data/db
    networks:
      - app-network
    depends_on:
      - "mongo-rs0-2"
      - "mongo-rs0-3"
  mongo-rs0-2:
    image: "mongo"
    command: --replSet rs0 --smallfiles --oplogSize 128
    networks:
      - app-network
    ports:
      - "27018:27017"
    volumes:
      - ./mongo-rs0-2/data:/data/db
  mongo-rs0-3:
    image: "mongo"
    command: --replSet rs0 --smallfiles --oplogSize 128
    networks:
      - app-network
    ports:
      - "27019:27017"
    volumes:
      - ./mongo-rs0-3/data:/data/db
  setup-rs:
    image: "setup-rs"
    build: ./setup
    networks:
      - app-network
    depends_on:
      - "mongo-rs0-1"
  nodejs:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8000:8000"
    container_name: nodejs
    restart: unless-stopped
    networks:
      - app-network
    depends_on:
      - setup-rs
  nextjs:
    build:
      context: ../.
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    container_name: nextjs
    restart: unless-stopped
    networks:
      - app-network
    depends_on:
      - nodejs
  webserver:
    image: nginx:mainline-alpine
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - ./picFolder:/picFolder
      - web-root:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
    depends_on:
      - nodejs
      - nextjs
      - setup-rs
    networks:
      - app-network
volumes:
  certbot-etc:
  certbot-var:
  web-root:
    driver: local
    driver_opts:
      type: none
      device: /
      o: bind
networks:
  app-network:
    driver: bridge

3) This didn't require modifying my nginx.conf file, but I'm including it here for good measure:

server {
    listen 80;
    listen [::]:80;
    add_header 'Access-Control-Allow-Origin' '*';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name example.com www.example.com localhost;
    location /socket.io {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        proxy_pass http://nodejs:8000/socket.io/;
    }
    location /back {
        proxy_connect_timeout 75s;
        proxy_read_timeout 75s;
        proxy_send_timeout 75s;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_pass http://nodejs:8000/back/;
    }
    location /staticBack {
        alias /picFolder;
        expires 1y;
        access_log off;
        add_header Cache-Control "public";
    }
    location / {
        proxy_pass http://nextjs:3000;
    }
    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }
}

4) Finally, I changed the connection string to 'mongodb://mongo-rs0-1,mongo-rs0-2,mongo-rs0-3/test', as shown here (https://github.com/drginm/docker-boilerplates/blob/master/mongodb-replicaset/web-site/database.js).

This all seems correct, but my nodejs throws the following error when mongoose tries to connect:

mongoose.connect("mongodb://mongo-rs0-1,mongo-rs0-2,mongo-rs0-3/test");

MongoDB connection error: { MongoError: no mongos proxy available
at Timeout.<anonymous> (/var/www/back/node_modules/mongodb-core/lib/topologies/mongos.js:757:28)
at ontimeout (timers.js:498:11)
at tryOnTimeout (timers.js:323:5)
at Timer.listOnTimeout (timers.js:290:5) name: 'MongoError', [Symbol(mongoErrorContextSymbol)]: {} }

Does anyone see what's going on? I've tried searching for this issue, but the generic answer seems to be that mongodb somehow just isn't "seeing" the instances. I'm a bit lost, and any help would be appreciated.

EDIT:

After some digging I found this SO post (Mongoose connection to a replica set) and noticed that the mongo.conf file (https://github.com/drginm/docker-boilerplates/blob/master/mongodb-replicaset/mongo-rs0-1/mongo.conf) seems to say the replica set name is rs0. So now I connect with:

mongoose.connect("mongodb://mongo-rs0-1,mongo-rs0-2,mongo-rs0-3/test?replicaSet=rs0");

However, I still get the following error (at least it's different!):

MongoDB connection error: { MongoError: no primary found in replicaset or invalid replica set name
at /var/www/back/node_modules/mongodb-core/lib/topologies/replset.js:636:11
at Server.<anonymous> (/var/www/back/node_modules/mongodb-core/lib/topologies/replset.js:357:9)
at Object.onceWrapper (events.js:315:30)
at emitOne (events.js:116:13)
at Server.emit (events.js:211:7)
at /var/www/back/node_modules/mongodb-core/lib/topologies/server.js:508:16
at /var/www/back/node_modules/mongodb-core/lib/connection/pool.js:532:18
at _combinedTickCallback (internal/process/next_tick.js:132:7)
at process._tickCallback (internal/process/next_tick.js:181:9) name: 'MongoError', [Symbol(mongoErrorContextSymbol)]: {} }

I'm fairly sure the replica set name is correct now, but I thought maybe I have to specify which node is the primary? However, this SO answer (https://dba.stackexchange.com/questions/136621/how-to-set-a-mongodb-node-to-return-as-the-primary-of-a-replication-set) seems to imply that this should happen automatically. Any help is still appreciated.

EDIT EDIT:

Further research turned up this SO post (Mongoose connection to a replica set). Given the newer v5 mongodb driver, connecting with the following options now seems to be the best bet:

var options = {
  native_parser: true,
  auto_reconnect: false,
  poolSize: 10,
  connectWithNoPrimary: true,
  sslValidate: false,
  socketOptions: {
    keepAlive: 1000,
    connectTimeoutMS: 30000
  }
};
mongoose.connect("mongodb://mongo-rs0-1:27017,mongo-rs0-2:27017,mongo-rs0-3:27017/test?replicaSet=rs0", options);

connectWithNoPrimary: true seems particularly important, since if there is a race condition between nodejs and the mongo services started by Docker, they may not have elected a primary yet.

However, I still get the following error (again, subtly different):

MongoDB connection error: { MongoError: no secondary found in replicaset or invalid replica set name
at /var/www/back/node_modules/mongodb-core/lib/topologies/replset.js:649:11
at Server.<anonymous> (/var/www/back/node_modules/mongodb-core/lib/topologies/replset.js:357:9)
at Object.onceWrapper (events.js:315:30)
at emitOne (events.js:116:13)
at Server.emit (events.js:211:7)
at /var/www/back/node_modules/mongodb-core/lib/topologies/server.js:508:16
at /var/www/back/node_modules/mongodb-core/lib/connection/pool.js:532:18
at _combinedTickCallback (internal/process/next_tick.js:132:7)
at process._tickCallback (internal/process/next_tick.js:181:9) name: 'MongoError', [Symbol(mongoErrorContextSymbol)]: {} }

So now it can't find a secondary in the replica set. Adding connectWithNoSecondary does nothing and produces the same error - I don't think it's a valid option. Still stuck; any help would be appreciated.

EDIT x3:

I changed my nodejs connect function to call itself recursively from the on-error callback. I hoped this would simply keep retrying the connection until any potential race condition in docker-compose resolved itself and I could connect successfully. However, I keep getting the above error, MongoError: no secondary found in replicaset or invalid replica set name, so I no longer think the problem is a race condition on connect - at least that doesn't seem to be the current bug.

var options = {
  native_parser: true,
  auto_reconnect: false,
  poolSize: 10,
  connectWithNoPrimary: true,
  sslValidate: false
};
// mongoose.connect("mongodb://mongo-rs0-1,mongo-rs0-2,mongo-rs0-3/?replicaSet=rs0", { useNewUrlParser: true, connectWithNoPrimary: true });
const connectFunc = () => {
  mongoose.connect("mongodb://mongo-rs0-1:27017,mongo-rs0-2:27017,mongo-rs0-3:27017/test?replicaSet=rs0", options);
  mongoose.Promise = global.Promise;
  var db = mongoose.connection;
  db.on('error', (error) => {
    console.log('MongoDB connection error:', error);
    console.log('now calling connectFunc() again');
    connectFunc();
  });
  db.once('open', function() {
    // we're connected!
    console.log('connected to mongoose db');
  });
};
connectFunc();

EDIT x4:

Here is another example tutorial with which I hit the same error: https://gist.github.com/patientplatypus/e48e6efdcc9f0f1aa551cc8342d0f2f3. I'm now half convinced this may be a bug in Automattic/mongoose, so I opened a bug report, which can be viewed here: https://github.com/Automattic/mongoose/issues/7705. It may be related to this long-standing issue opened in 2016: https://github.com/Automattic/mongoose/issues/4596.

EDIT x5:

Here is the log output from one of the mongo containers:

patientplatypus:~/Documents/patientplatypus.com/forum/back:19:47:11$docker ps
CONTAINER ID        IMAGE                   COMMAND                  CREATED             STATUS              PORTS                      NAMES
d46cfb5e1927        nginx:mainline-alpine   "nginx -g 'daemon of…"   3 minutes ago       Up 3 minutes        0.0.0.0:80->80/tcp         webserver
6798fe1f6093        back_nextjs             "npm start"              3 minutes ago       Up 3 minutes        0.0.0.0:3000->3000/tcp     nextjs
ab6888f703c7        back_nodejs             "/docker-entrypoint.…"   3 minutes ago       Up 3 minutes        0.0.0.0:8000->8000/tcp     nodejs
48131a82b34e        mongo-start             "docker-entrypoint.s…"   3 minutes ago       Up 3 minutes        0.0.0.0:27017->27017/tcp   mongo1
312772b1b0f1        mongo                   "docker-entrypoint.s…"   3 minutes ago       Up 3 minutes        0.0.0.0:27019->27017/tcp   mongo3
9fe9a16eb20e        mongo                   "docker-entrypoint.s…"   3 minutes ago       Up 3 minutes        0.0.0.0:27018->27017/tcp   mongo2
patientplatypus:~/Documents/patientplatypus.com/forum/back:19:48:55$docker logs 9fe9a16eb20e
2019-04-12T00:45:29.689+0000 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2019-04-12T00:45:29.727+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=9fe9a16eb20e
2019-04-12T00:45:29.728+0000 I CONTROL  [initandlisten] db version v4.0.8
2019-04-12T00:45:29.728+0000 I CONTROL  [initandlisten] git version: 9b00696ed75f65e1ebc8d635593bed79b290cfbb
2019-04-12T00:45:29.728+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
2019-04-12T00:45:29.728+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2019-04-12T00:45:29.728+0000 I CONTROL  [initandlisten] modules: none
2019-04-12T00:45:29.729+0000 I CONTROL  [initandlisten] build environment:
2019-04-12T00:45:29.729+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604
2019-04-12T00:45:29.729+0000 I CONTROL  [initandlisten]     distarch: x86_64
2019-04-12T00:45:29.729+0000 I CONTROL  [initandlisten]     target_arch: x86_64
2019-04-12T00:45:29.729+0000 I CONTROL  [initandlisten] options: { net: { bindIpAll: true }, replication: { oplogSizeMB: 128, replSet: "rs" }, storage: { mmapv1: { smallFiles: true } } }
2019-04-12T00:45:29.734+0000 W STORAGE  [initandlisten] Detected unclean shutdown - /data/db/mongod.lock is not empty.
2019-04-12T00:45:29.738+0000 I STORAGE  [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2019-04-12T00:45:29.741+0000 W STORAGE  [initandlisten] Recovering data from the last clean checkpoint.
2019-04-12T00:45:29.742+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=1461M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2019-04-12T00:45:43.165+0000 I STORAGE  [initandlisten] WiredTiger message [1555029943:165420][1:0x7f7051ca9a40], txn-recover: Main recovery loop: starting at 7/4608 to 8/256
2019-04-12T00:45:43.214+0000 I STORAGE  [initandlisten] WiredTiger message [1555029943:214706][1:0x7f7051ca9a40], txn-recover: Recovering log 7 through 8
2019-04-12T00:45:43.787+0000 I STORAGE  [initandlisten] WiredTiger message [1555029943:787329][1:0x7f7051ca9a40], txn-recover: Recovering log 8 through 8
2019-04-12T00:45:43.849+0000 I STORAGE  [initandlisten] WiredTiger message [1555029943:849811][1:0x7f7051ca9a40], txn-recover: Set global recovery timestamp: 0
2019-04-12T00:45:43.892+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
2019-04-12T00:45:43.972+0000 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
2019-04-12T00:45:43.972+0000 I CONTROL  [initandlisten] 
2019-04-12T00:45:43.972+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-04-12T00:45:43.972+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2019-04-12T00:45:43.973+0000 I CONTROL  [initandlisten] 
2019-04-12T00:45:44.035+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
2019-04-12T00:45:44.054+0000 I REPL     [initandlisten] Did not find local voted for document at startup.
2019-04-12T00:45:44.064+0000 I REPL     [initandlisten] Rollback ID is 1
2019-04-12T00:45:44.064+0000 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument: Did not find replica set configuration document in local.system.replset
2019-04-12T00:45:44.065+0000 I CONTROL  [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured
2019-04-12T00:45:44.065+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
2019-04-12T00:45:44.069+0000 I CONTROL  [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: config.system.sessions does not exist
2019-04-12T00:45:45.080+0000 I FTDC     [ftdc] Unclean full-time diagnostic data capture shutdown detected, found interim file, some metrics may have been lost. OK

In particular, this line:

Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset

looks worrying, and I'll have to keep digging into it. If anyone sees anything, please let me know.

EDIT x6:

So I docker exec'd into one of the mongo replicas and printed rs.status(). I used docker exec -it mongo1 mongo and then rs.status(), and got the following output:

{
    "operationTime" : Timestamp(0, 0),
    "ok" : 0,
    "errmsg" : "no replset config has been received",
    "code" : 94,
    "codeName" : "NotYetInitialized",
    "$clusterTime" : {
        "clusterTime" : Timestamp(0, 0),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}

This seems very similar to the error in the mongodb replica logs above (2019-04-12T00:45:44.064+0000 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset). Does anyone know what it thinks is missing?

You need to initiate the replica set before you can access it. Otherwise, the application won't be able to connect to the local instances.
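For reference, initiation is done with rs.initiate(), typically run from the mongo shell (e.g. via docker exec -it mongo1 mongo). A minimal sketch of the config document it takes, with the _id and hosts assumed to match the compose file above:

```javascript
// The config document rs.initiate() expects. The _id must match the
// --replSet name the mongod processes were started with; the hosts here
// assume the compose service names (adjust to your actual setup).
const rsConfig = {
  _id: 'rs0',
  members: [
    { _id: 0, host: 'mongo-rs0-1:27017' },
    { _id: 1, host: 'mongo-rs0-2:27017' },
    { _id: 2, host: 'mongo-rs0-3:27017' }
  ]
};

// In the mongo shell you would run: rs.initiate(rsConfig)
// Here we just print the document for inspection.
console.log(JSON.stringify(rsConfig, null, 2));
```

Until a document like this has been applied, rs.status() keeps returning NotYetInitialized, which matches the output shown in the question.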

Ideally, you need to add some delay between the replica set configuration (the setup-rs step) and the application start-up, since configuring the replica set can take longer than the application takes to start.
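One way to tolerate that start-up gap is to retry the connection with a delay. A minimal sketch, where connect stands in for whatever promise-returning call the app actually uses (e.g. () => mongoose.connect(uri, options)); the function name and defaults are illustrative:

```javascript
// Retry a promise-returning connect function with a fixed delay between
// attempts, so the app survives the window in which setup-rs has not yet
// finished configuring the replica set.
async function connectWithRetry(connect, retries = 10, delayMs = 3000) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await connect(); // e.g. () => mongoose.connect(uri, options)
    } catch (err) {
      if (attempt === retries) throw err; // out of attempts: surface the error
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
}
```

Unlike the recursive on-error callback in the question, this bounds the number of attempts and surfaces the last error instead of retrying forever.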

Or fix the script, if there is a problem with the script itself.

You are probably using a replica set, so the connection string should be adjusted accordingly. Try adding the parameter replicaSet=rs0 at the end of the connection string, e.g.:

mongodb://<usr>:<pass>@<host1>:<port1>,<host2>:<port2>/<database_name>?replicaSet=rs0<&otherParams>
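As a small illustration of that adjustment (the helper name here is made up, not part of any driver API):

```javascript
// Append a replicaSet parameter to a connection string, using '?' or '&'
// depending on whether the URI already carries query parameters.
function withReplicaSet(uri, rsName) {
  const sep = uri.includes('?') ? '&' : '?';
  return uri + sep + 'replicaSet=' + encodeURIComponent(rsName);
}

// withReplicaSet('mongodb://mongo-rs0-1,mongo-rs0-2/test', 'rs0')
// gives 'mongodb://mongo-rs0-1,mongo-rs0-2/test?replicaSet=rs0'
```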

I had a similar problem. I kept getting the "MongoError: no mongos proxy available" error. In the end I managed to solve it this way:

// The mongoose connection options
let options = {
  ssl: true,
  sslValidate: true,
  poolSize: 1,
  reconnectTries: 1,
  useNewUrlParser: true,
  dbName: 'myDb' // Specify the database here
}

My connection string looks like this:

mongodb://<usr>:<pass>@host1.com:12345,host2.com:12346/adminDb?authSource=admin&ssl=true

I kept the default admin database in the connection string and instead defined the database to connect to in the options. That solved the problem.

I got this error when trying to connect to a replica set with Node.js 0.10 / mongodb driver 2.2.36. The error was caused by a proxy that required TLS 1.2.

To fix the issue, upgrade the Node.js runtime to 0.12 or later.
