Remove unicode character from python when sending response back to javascript

elif self.path == "/recQuery":
    content_length = int(self.headers.getheader('content-length'))
    cont_Length = content_length
    print "Query Received"
    body = self.rfile.read(content_length)
    keywords = body.replace("\\", "")
    result = json.loads(keywords)
    query = result['query']

    r = requests.get('http://example.com')  # This returns the JSON
    print r.json()
    self.wfile.write(r.json())  # Send response back to the JavaScript

{
  u 'debug': [u 'time to fit model 0.02 s', u 'time to generate suggestions 0.06 s', u 'time to search documents 0.70 s', u 'time to misc operations 0.02 s'], u 'articles': [{
    u 'is-saved': False,
      u 'title': u 'Reinforcement and learning',
      u 'abstract': u 'Evidence has been accumulating to support the process of reinforcement as a potential mechanism in speciation. In many species, mate choice decisionsare influenced by cultural factors, including learned mating preferences (sexual imprinting) or learned mate attraction signals (e.g., bird song). Ithas been postulated that learning can have a strong impact on the likelihood of speciation and perhaps on the process of reinforcement, but no modelshave explicitly considered learning in a reinforcement context. We review the evidence that suggests that learning may be involved in speciation and reinforcement, and present a model of reinforcement via learned preferences. We show that not only can reinforcement occur when preferences are learned by imprinting, but that such preferences can maintain species differenceseasily in comparison with both autosomal and sex-linked genetically inherited preferences. We highlight the need for more explicit study of the connection between the behavioral process of learning and the evolutionary process of reinforcement in natural systems.',
      u 'date': u '2009-01-01T00:00:00',
      u 'publication-forum': u 'EVOLUTIONARY ECOLOGY',
      u 'publication-forum-type': u 'article',
      u 'authors': u 'M R Servedio, S A Saether, G P Saetre',
      u 'keywords': u 'imprinting, learning, preferences, model, reinforcement, speciation',
      u 'id': u '572749dd12a0854514c1f764'
  }, {
    u 'is-saved': False,
      u 'title': u 'Relational reinforcement learning',
      u 'abstract': u 'Then, relational reinforcement learning is presented as a combination of reinforcement learning with relational learning. Its advantages - such as the possibility of using structural representations, making abstraction from specific goals pursued and exploiting the results of previous learning phases - are discussed.',
      u 'date': u '2001-01-01T00:00:00',
      u 'publication-forum': u 'MULTI-AGENT SYSTEMS AND APPLICATIONS',
      u 'publication-forum-type': u 'article',
      u 'authors': u 'K Driessens',
      u 'keywords': u 'reinforcement, learning, reinforcement learning',
      u 'id': u '572749dd12a0854514c1f765'
  }, {
    u 'is-saved': False,
      u 'title': u 'Meta-learning in Reinforcement Learning',
      u 'abstract': u 'Meta-parameters in reinforcement learning should be tuned to the environmental dynamics and the animal performance. Here, we propose a biologically plausible meta-reinforcement learning algorithm for tuning these meta-parameters in a dynamic, adaptive manner. We tested our algorithm in both a simulation of a Markov decision task and in a non-linear control task. Our results show that the algorithm robustly finds appropriate meta-parameter values, and controls the meta-parameter time course, in both static and dynamic environments. We suggest that the phasic and tonic components of dopamine neuron firing can encode the signal required for meta-learning of reinforcement learning. (C) 2002 Elsevier Science Ltd. All rights reserved.',
      u 'date': u '2003-01-01T00:00:00',
      u 'publication-forum': u 'NEURAL NETWORKS',
      u 'publication-forum-type': u 'article',
      u 'authors': u 'N Schweighofer, K Doya',
      u 'keywords': u 'reinforcement learning, dopamine, dynamic environment, meta-learning, meta-parameters, neuromodulation, td error, reinforcement, learning',
      u 'id': u '572749dd12a0854514c1f766'
  }, {
    u 'is-saved': False,
      u 'title': u 'Evolutionary adaptive-critic methods for reinforcement learning',
      u 'abstract': u 'In this paper, a novel hybrid learning method is proposed for reinforcement learning problems with continuous state and action spaces. The reinforcement learning problems are modeled as Markov decision processes (MDPs) and the hybrid learning method combines evolutionary algorithms with gradient-based Adaptive Heuristic Critic (AHC) algorithms to approximate the optimal policy of MDPs. The suggested method takes the advantages of evolutionary learning and gradient-based reinforcement learning to solve reinforcement learning problems. Simulation results on the learning control of an acrobot illustrate the efficiency of the presented method.',
      u 'date': u '2002-01-01T00:00:00',
      u 'publication-forum': u"CEC'02: PROCEEDINGS OF THE 2002 CONGRESS ON EVOLUTIONARY COMPUTATION, VOLS1 AND 2",
      u 'publication-forum-type': u 'article',
      u 'authors': u 'X Xu, H G He, D W Hu',
      u 'keywords': u 'markov decision process, reinforcement, learning, model, reinforcement learning, robotics',
      u 'id': u '572749dd12a0854514c1f767'
  }, {
    u 'is-saved': False,
      u 'title': u 'Stable Fitted Reinforcement Learning',
      u 'url': u 'http://books.nips.cc/papers/files/nips08/1052.pdf',
      u 'abstract': u 'We describe the reinforcement learning problem, motivate algorithms which seek an approximation to the Q function, and present new convergence results for two such algorithms.',
      u 'date': u '1995-01-01T00:00:00',
      u 'publication-forum': u 'NIPS 1995',
      u 'authors': u 'G. J. GORDON',
      u 'keywords': u 'reinforcement, learning, reinforcement learning',
      u 'id': u '572749dd12a0854514c1f768'
  }, {
    u 'is-saved': False,
      u 'title': u 'Feudal Reinforcement Learning',
      u 'url': u 'http://books.nips.cc/papers/files/nips05/0271.pdf',
      u 'abstract': u"One way to speed up reinforcement learning is to enable learning to happen simultaneously at multiple resolutions in space and time. This paper shows how to create a Q-learning managerial hierarchy in which high level managers learn how to set tasks to their submanagers who, in turn, learn how to satisfy them. Sub-managers need not initially understand their managers' commands. They simply learn to maximise their reinforcement in the context of the current command. We illustrate the system using a simple maze task.. As the system learns how to get around, satisfying commands at the multiple levels, it explores more efficiently than standard, flat, Q-learning and builds a more comprehensive map.",
      u 'date': u '1992-01-01T00:00:00',
      u 'publication-forum': u 'NIPS 1992',
      u 'authors': u 'Peter Dayan, Geoffrey E. Hinton',
      u 'keywords': u 'reinforcement, learning, reinforcement learning',
      u 'id': u '572749dd12a0854514c1f769'
  }, {
    u 'is-saved': False,
      u 'title': u 'Reinforcement learning in the multi-robot domain',
      u 'abstract': u 'This paper describes a formulation of reinforcement learning that enables learning in noisy, dynamic environments such as in the complex concurrent multi-robot learning domain. The methodology involves minimizing the learning space through the use of behaviors and conditions, and dealing with the credit assignment problem through shaped reinforcement in the form of heterogeneous reinforcement functions and progress estimators. We experimentally validate the approach on a group of four mobile robots learning a foraging task.',
      u 'date': u '1997-01-01T00:00:00',
      u 'publication-forum': u 'AUTONOMOUS ROBOTS',
      u 'publication-forum-type': u 'article',
      u 'authors': u 'M J Mataric',
      u 'keywords': u 'robotics, robot learning, group behavior, multi-agent systems, reinforcement learning, dynamic environment, reinforcement, learning',
      u 'id': u '572749dd12a0854514c1f76a'
  }, {
    u 'is-saved': False,
      u 'title': u 'A reinforcement learning approach to online clustering',
      u 'abstract': u 'A general technique is proposed for embedding online clustering algorithmsbased on competitive learning in a reinforcement learning framework. The basic idea is that the clustering system can be viewed as a reinforcement learning system that learns through reinforcements to follow the clustering strategy we wish to implement. In this sense, the reinforcement guided competitive learning (RC;CL) algorithm is proposed that constitutes a reinforcement-based adaptation of learning vector quantization (LVQ) with enhanced clustering capabilities. In addition, we suggest extensions of RGCL and LVQ that are characterized by the property of sustained exploration and significantly improve the performance of those algorithms, as indicated by experimental tests on well-known data sets.',
      u 'date': u '1999-01-01T00:00:00',
      u 'publication-forum': u 'NEURAL COMPUTATION',
      u 'publication-forum-type': u 'article',
      u 'authors': u 'A Likas',
      u 'keywords': u 'reinforcement, learning, reinforcement learning',
      u 'id': u '572749dd12a0854514c1f76b'
  }, {
    u 'is-saved': False,
      u 'title': u 'Kernel-Based Reinforcement Learning',
      u 'abstract': u 'We consider the problem of approximating the cost-to-go functions in reinforcement learning. By mapping the state implicitly into a feature space, weperform a simple algorithm in the feature space, which corresponds to a complex algorithm in the original state space. Two kernel-based reinforcementlearning algorithms, the e-insensitive kernel based reinforcement learning(epsilon-KRL) and the least squares kernel based reinforcement learning (LS-KRL) are proposed. An example shows that the proposed methods can deal effectively with the reinforcement learning problem without having to exploremany states.',
      u 'date': u '2006-01-01T00:00:00',
      u 'publication-forum': u 'INTELLIGENT COMPUTING, PART I',
      u 'publication-forum-type': u 'article',
      u 'authors': u 'G H Hu, Y Q Qiu, L M Xiang',
      u 'keywords': u 'reinforcement, learning, reinforcement learning',
      u 'id': u '572749dd12a0854514c1f76c'
  }, {
    u 'is-saved': False,
      u 'title': u 'Reinforcement Learning for Adaptive Routing',
      u 'url': u 'http://arxiv.org/abs/cs/0703138',
      u 'abstract': u 'Reinforcement learning means learning a policy--a mapping of observations into actions--based on feedback from the environment. The learning can be viewed as browsing a set of policies while evaluating them by trial through interaction with the environment. We present an application of gradient ascent algorithm for reinforcement learning to a complex domain of packet routing in network communication and compare the performance of this algorithm to other routing methods on a benchmark problem.',
      u 'date': u '2007-01-01T00:00:00',
      u 'publication-forum': u 'arXiv.org',
      u 'authors': u 'Leonid Peshkin, Virginia Savova',
      u 'keywords': u 'reinforcement, learning, reinforcement learning',
      u 'id': u '572749dd12a0854514c1f76d'
  }], u 'keywords_local': {
    u 'dynamic programming': {
        u 'distance': 0.6078647488472677,
          u 'angle': 150.8840432613797
      },
      u 'on-line learning': {
        u 'distance': 0.7752212048381117,
          u 'angle': 51.8728440344057
      },
      u 'reinforcement learning': {
        u 'distance': 1.0,
          u 'angle': 132.93204012494624
      },
      u 'reinforcement': {
        u 'distance': 0.8544341892190607,
          u 'angle': 94.75966624638419
      },
      u 'neural dynamic programming': {
        u 'distance': 0.8898672614396893,
          u 'angle': 103.76832781320546
      },
      u 'genetic algorithms': {
        u 'distance': 0.5448835956783193,
          u 'angle': 0.0
      },
      u 'learning': {
        u 'distance': 0.8544341892190607,
          u 'angle': 180.0
      },
      u 'model': {
        u 'distance': 0.6424412547642948,
          u 'angle': 114.45637264648838
      },
      u 'navigation': {
        u 'distance': 0.6125205579210247,
          u 'angle': 88.55814464422271
      },
      u 'fuzzy logic': {
        u 'distance': 0.6204073568578674,
          u 'angle': 180.0
      }
  }, u 'keywords_semi_local': {
    u 'latent learning': {
        u 'distance': 0.0,
          u 'angle': 132.93204012494624
      },
      u 'neural networks': {
        u 'distance': 1.0,
          u 'angle': 114.45637264648838
      },
      u 'meta-learning': {
        u 'distance': 0.5606272601392779,
          u 'angle': 121.07077066747541
      },
      u 'neuromodulation': {
        u 'distance': 0.5606272601392779,
          u 'angle': 121.07077066747541
      },
      u 'imprinting': {
        u 'distance': 0.3549922259116784,
          u 'angle': 51.8728440344057
      },
      u 'rough sets': {
        u 'distance': 0.7556870841637823,
          u 'angle': 0.0
      },
      u 'speciation': {
        u 'distance': 0.3549922259116784,
          u 'angle': 51.8728440344057
      },
      u 'robot learning': {
        u 'distance': 0.5732466205043193,
          u 'angle': 75.01844366338882
      },
      u 'multi-agent learning': {
        u 'distance': 0.3539033107593776,
          u 'angle': 165.77500580957724
      },
      u 'supply chain management': {
        u 'distance': 0.7412680693648454,
          u 'angle': 180.0
      },
      u 'td error': {
        u 'distance': 0.5606272601392779,
          u 'angle': 121.07077066747541
      },
      u 'robocup': {
        u 'distance': 0.8025792169619675,
          u 'angle': 88.55814464422271
      },
      u 'kernel-based learning': {
        u 'distance': 0.7404347021238603,
          u 'angle': 41.29183304013004
      },
      u 'swarm': {
        u 'distance': 0.7556870841637823,
          u 'angle': 0.0
      },
      u 'risk-sensitive control': {
        u 'distance': 0.8340971241377915,
          u 'angle': 94.75966624638419
      },
      u 'adaptive control': {
        u 'distance': 0.34596782799450027,
          u 'angle': 125.34609947124422
      },
      u 'group behavior': {
        u 'distance': 0.5732466205043193,
          u 'angle': 75.01844366338882
      },
      u 'meta-parameters': {
        u 'distance': 0.5606272601392779,
          u 'angle': 121.07077066747541
      },
      u"bellman's equation": {
        u 'distance': 0.9584860393532658,
          u 'angle': 71.16343972789532
      },
      u 'dynamic environment': {
        u 'distance': 0.7014728291381438,
          u 'angle': 103.76832781320546
      },
      u 'neural control': {
        u 'distance': 0.8025792169619675,
          u 'angle': 88.55814464422271
      },
      u 'transfer learning': {
        u 'distance': 0.6876390048950136,
          u 'angle': 150.8840432613797
      },
      u 'multi-agent systems': {
        u 'distance': 0.5732466205043193,
          u 'angle': 75.01844366338882
      },
      u 'monte carlo method': {
        u 'distance': 0.7556870841637823,
          u 'angle': 0.0
      },
      u 'learning mobile robots': {
        u 'distance': 0.8025792169619675,
          u 'angle': 88.55814464422271
      },
      u 'ethology': {
        u 'distance': 0.7556870841637823,
          u 'angle': 0.0
      },
      u 'parallel agents': {
        u 'distance': 0.3539033107593776,
          u 'angle': 165.7750058095772
      },
      u 'multi-task learning': {
        u 'distance': 0.6876390048950136,
          u 'angle': 150.8840432613797
      },
      u 'autonomous learning robots': {
        u 'distance': 0.8025792169619675,
          u 'angle': 88.55814464422271
      },
      u 'optimal control': {
        u 'distance': 0.5327106780845866,
          u 'angle': 37.59122818518838
      }
  }, u 'inputs': [
    [u 'learning', 1.0, 0.8544341892190607, 1.1491961201808072, -1],
    [u 'reinforcement learning', 0.978719279361022, 1.0, 1.1256696437503226, -1],
    [u 'reinforcement', 1.0, 0.8544341892190607, 1.1491961201808072, -1]
  ]
}

function notifyServerForQuery()
{
    if (search_query != "")
    {
        var http = new XMLHttpRequest();
        var url = SERVER + "/recQuery";
        var params = JSON.stringify({query: search_query});
        http.onreadystatechange = function() {
            if (http.readyState == 4 && http.status == 200) {
                console.log(http.responseText);
            }
        };
        http.open("POST", url, true);
        http.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
        http.send(params);
    }
}

The problem I'm facing is that when I send the response back to JavaScript from the Python backend, the Unicode markers in the JSON (the u'' prefixes shown above) are there and get sent back as well. So when I try to parse the JSON on the JavaScript side, it throws an error.

The main thing I want to achieve is to remove the Unicode characters from the JSON, either in Python on the server side or in JavaScript. Anything that does the job is welcome.
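
To make the problem concrete, here is a tiny stand-in for r.json() (the dict literal below is made up, not part of the real data): printing the dict directly shows the u'' prefixes, because that is just Python 2's repr of unicode strings, while serializing it to JSON text does not.

# -*- coding: utf-8 -*-
# Python 2 illustration of the symptom; 'data' is a made-up stand-in for r.json()
import json

data = {u'title': u'Reinforcement and learning', u'is-saved': False}

print str(data)         # dict repr keeps the u'' prefixes (key order may vary)
print json.dumps(data)  # plain JSON text without the prefixes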


You need to encode the output.

If I were you, I would use Python 3, because encoding in Python 2 is a headache. Anyway, I made a super encoding function to help you:

def encode_dict(dic, encoding='utf-8'):
    """Recursively encode the unicode keys and values of a dict as byte strings (Python 2)."""
    new_dict = {}

    for key, value in dic.items():
        new_key = key.encode(encoding)

        if isinstance(value, list):
            new_dict[new_key] = []
            for item in value:
                if isinstance(item, unicode):
                    new_dict[new_key].append(item.encode(encoding))
                elif isinstance(item, dict):
                    # recurse into nested dicts inside lists
                    new_dict[new_key].append(encode_dict(item, encoding))
                else:
                    new_dict[new_key].append(item)

        elif isinstance(value, unicode):
            new_dict[new_key] = value.encode(encoding)

        elif isinstance(value, dict):
            new_dict[new_key] = encode_dict(value, encoding)

        else:
            # keep numbers, booleans, None, etc. unchanged
            new_dict[new_key] = value

    return new_dict

And then you would do: self.wfile.write(encode_dict(r.json()))
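
A rough sketch of how this might be wired into the handler from the question (a sketch only, assuming Python 2: wfile.write() needs a string, so json.dumps handles the final serialization; the handler class name, port, and URL are placeholders):

# Sketch: Python 2, reusing encode_dict() from above; names and port are placeholders
import json
import requests
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

class RecHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path == "/recQuery":
            length = int(self.headers.getheader('content-length'))
            query = json.loads(self.rfile.read(length))['query']  # body sent by the JS above

            r = requests.get('http://example.com')       # placeholder URL from the question
            payload = json.dumps(encode_dict(r.json()))  # plain JSON text, no u'' prefixes

            self.send_response(200)
            self.send_header('Content-Type', 'application/json')
            self.end_headers()
            self.wfile.write(payload)                    # JSON.parse() works on this in the browser

# HTTPServer(('', 8000), RecHandler).serve_forever()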