
2. Machine Learning Paradigms

Machine learning is commonly separated into three main learning paradigms: supervised learning, unsupervised learning, and reinforcement learning. These paradigms differ in the tasks they can solve and in how the data is presented to the computer. Usually, the task and the data directly determine which paradigm should be used (and in most cases, it is supervised learning). In some cases, though, there is a choice to make, and these paradigms can often be combined to obtain better results. This chapter gives an overview of what these learning paradigms are and what they can be used for.

Supervised Learning

Supervised learning is the most common learning paradigm. In supervised learning, the computer learns from a set of input-output pairs, which are called labeled examples:

The goal of supervised learning is usually to train a predictive model from these pairs. A predictive model is a program that is able to guess the output value (a.k.a. label) for a new unseen input. In a nutshell, the computer learns to predict using examples of correct predictions. For example, let’s consider a dataset of animal characteristics (note that typical datasets are much larger):

Our goal is to predict the weight of an animal from its other characteristics, so we rewrite this dataset as a set of input-output pairs:

 

The input variables (here, age and sex) are generally called features, and the set of features representing an example is called a feature vector. From this dataset, we can learn a predictor in a supervised way using the function Predict:
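The book demonstrates this step with the Wolfram Language function Predict, whose code is not reproduced here. As a rough stand-in, here is a minimal Python sketch of the same idea using a 1-nearest-neighbor predictor; the animal data and the distance function are invented for illustration:

```python
# Toy labeled dataset (invented): feature vector -> weight in kilograms.
animals = [
    ({"age": 1, "sex": "female"}, 4.2),
    ({"age": 3, "sex": "male"}, 9.1),
    ({"age": 5, "sex": "female"}, 7.8),
    ({"age": 7, "sex": "male"}, 11.5),
]

def distance(a, b):
    """Crude distance between two feature vectors."""
    return abs(a["age"] - b["age"]) + (0 if a["sex"] == b["sex"] else 1)

def predict(x):
    """1-nearest-neighbor regression: return the label of the closest example."""
    _, label = min(animals, key=lambda pair: distance(pair[0], x))
    return label

print(predict({"age": 4, "sex": "male"}))  # prints 9.1 (closest example: age 3, male)
```

A real learned predictor would produce a smoother function than this lookup, but the structure is the same: train on input-output pairs, then query with a new input.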

 
 

Now we can use this predictor to guess the weight of a new animal:

 
 

This is an example of a regression task (see Chapter 4, Regression) because the output is numeric. Here is another supervised learning example where the input is text and the output is a categorical variable ("cat" or "dog"):
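The original shows this with Wolfram's classifier on text examples. As an invented stand-in, here is a Python sketch that classifies a text by its word overlap with the labeled examples (the sentences below are made up):

```python
# Toy labeled text dataset (invented): description -> class.
examples = [
    ("purrs on the sofa and chases mice", "cat"),
    ("meows at night and licks its paws", "cat"),
    ("barks at strangers and fetches the ball", "dog"),
    ("wags its tail and loves long walks", "dog"),
]

def overlap(a, b):
    """Number of words two texts share (a crude similarity measure)."""
    return len(set(a.split()) & set(b.split()))

def classify(text):
    """Predict the class of the most word-overlapping training example."""
    _, label = max(examples, key=lambda pair: overlap(pair[0], text))
    return label

print(classify("it barks and fetches sticks"))  # prints dog
```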

 
 

Again, we can use the resulting model to make a prediction:

 
 

Because the output is categorical, this is an example of a classification task (see Chapter 3, Classification). The image identification example from the first chapter is another example of classification since the data consists of labeled examples such as:

As we can see, supervised learning is separated into two phases: a learning phase during which a model is produced and a prediction phase during which the model is used. The learning phase is called the training phase because the model is trained to perform the task. The prediction phase is called the evaluation phase or inference phase because the output is inferred (i.e. deduced) from the input.

Regression and classification are the main tasks of supervised learning, but this paradigm goes beyond these tasks. For example, object detection is an application of supervised learning for which the output consists of multiple classes and their corresponding box positions:

Text translation and speech recognition, for which the output is text, are also tackled in a supervised way:

 
 
 
 

We could imagine all sorts of other output types. As long as the training data consists of a set of input-output pairs, it is a supervised learning task.

Most of the applications that we showed in the first chapter are learned in a supervised way. Currently, the majority of machine learning applications being developed use a supervised learning approach. One reason is that the main supervised tasks (classification and regression) are useful, well defined, and can often be tackled using simple algorithms. Another reason is that many tools have been developed for this paradigm. The main downside of supervised learning, though, is that it requires labeled data, which can be hard to obtain in some cases.

Unsupervised Learning

Unsupervised learning is the second most used learning paradigm. It is not used as much as supervised learning, but it unlocks different types of applications. In unsupervised learning, there are neither inputs nor outputs; the data is just a set of examples:

Unsupervised learning can be used for a diverse range of tasks. One of them is called clustering (see Chapter 6, Clustering), and its goal is to separate data examples into groups called clusters:
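As an illustration of the idea (not taken from the book), here is a minimal k-means clustering sketch in Python on invented one-dimensional data:

```python
# Minimal k-means clustering sketch on 1-D data (values invented).
data = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]

def kmeans(points, centers, steps=10):
    """Alternate between assigning points to their nearest center
    and moving each center to the mean of its assigned points."""
    for _ in range(steps):
        clusters = [[] for _ in centers]
        for p in points:
            k = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
            clusters[k].append(p)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers, clusters

centers, clusters = kmeans(data, centers=[0.0, 10.0])
print(centers)  # two cluster means, one near 1.0 and one near 8.0
```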

An application of clustering could be to automatically separate the customers of a company into groups in order to create better-targeted marketing campaigns. Clustering is also used simply as an exploration tool to obtain insights about the data and make informed decisions.

Another classic unsupervised task is called dimensionality reduction (see Chapter 7, Dimensionality Reduction). The goal of dimensionality reduction is to reduce the number of variables in a dataset while trying to preserve some properties of the data, such as distances between examples. Here is an example of a dataset of three variables reduced to two variables:

Dimensionality reduction can be used for a variety of tasks, such as compressing the data, learning with missing labels, creating search engines, or even creating recommendation systems. Dimensionality reduction can also be used as an exploration tool to visualize an entire dataset in a reduced space (see Chapter 7):

Anomaly detection (see Chapter 7, Dimensionality Reduction, and Chapter 8, Distribution Learning) is another task that can be tackled in an unsupervised way. Anomaly detection concerns the identification of examples that are anomalous, a.k.a. outliers. Here is an example of anomaly detection performed on a simple numeric dataset:
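A minimal sketch of this idea (invented data, not the book's example) flags values that lie far from the bulk of the dataset:

```python
# Flag values that lie far from the bulk of a numeric dataset (invented data).
data = [10.1, 9.8, 10.3, 9.9, 10.0, 25.0, 10.2]

mean = sum(data) / len(data)
variance = sum((x - mean) ** 2 for x in data) / len(data)
std = variance ** 0.5

# A common heuristic: call a value anomalous if it lies more than
# two standard deviations away from the mean.
anomalies = [x for x in data if abs(x - mean) > 2 * std]
print(anomalies)  # prints [25.0]
```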

This task could be useful for detecting fraudulent credit card transactions, to clean a dataset, or to detect when something is going wrong in a manufacturing process.

Another classic unsupervised task is called missing value imputation (see Chapter 7 and Chapter 8), and its goal is to fill in the missing values in a dataset:
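The simplest imputation strategy replaces each missing value with the mean of its column. Here is a sketch on an invented toy dataset (more sophisticated methods model the dependencies between variables):

```python
# Fill missing values (None) in each column with that column's mean (toy data).
rows = [
    [1.0, 4.0],
    [2.0, None],
    [3.0, 8.0],
    [None, 6.0],
]

n_cols = len(rows[0])
means = []
for j in range(n_cols):
    observed = [r[j] for r in rows if r[j] is not None]
    means.append(sum(observed) / len(observed))

imputed = [[means[j] if r[j] is None else r[j] for j in range(n_cols)]
           for r in rows]
print(imputed)  # prints [[1.0, 4.0], [2.0, 6.0], [3.0, 8.0], [2.0, 6.0]]
```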

This task is extremely useful because most datasets have missing values and many algorithms cannot handle them. In some cases, missing imputation techniques can also be used for predictive tasks, such as recommendation engines (see Chapter 7).

Finally, the most difficult unsupervised learning task is probably to learn how to generate examples that are similar to the training data. This task is called generative modeling (see Chapter 8) and can, for example, be used to learn how to generate new faces from many example faces. Here are such synthetic faces generated by a neural network from random noise:

Such generation techniques can also be used to enhance resolution, denoise, or impute missing values.

Unsupervised learning is a bit less used than supervised learning, mostly because the tasks it solves are less common and are harder to implement than predictive tasks. However, unsupervised learning can be applied to a more diverse set of tasks than supervised learning. Nowadays, unsupervised learning is a key element of many machine learning applications and is also used as a tool to explore data. Moreover, many researchers believe that unsupervised learning is how humans learn most of their knowledge and will, therefore, be the key to developing future artificially intelligent systems.

Reinforcement Learning

The third most classic learning paradigm is called reinforcement learning, which is a way for autonomous agents to learn. Reinforcement learning is fundamentally different from supervised and unsupervised learning in the sense that the data is not provided as a fixed set of examples. Rather, the data to learn from is obtained by interacting with an external system called the environment. The name “reinforcement learning” originates from behavioral psychology, but it could just as well be called “interactive learning.”

Reinforcement learning is often used to teach agents, such as robots, to learn a given task. The agent learns by taking actions in the environment and receiving observations from this environment:

Typically, the agent starts its learning process by acting randomly in the environment, and then it gradually learns from its experience to perform the task better using a sort of trial-and-error strategy. The learning is usually guided by a reward that is given to the agent depending on its performance. More precisely, the agent learns a policy that maximizes this reward. A policy is a model predicting which action to take given previous actions and observations.
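To make the trial-and-error idea concrete, here is a minimal sketch (invented, not from the book) of an agent learning which of two actions yields more reward, using an epsilon-greedy strategy on a toy two-action environment:

```python
import random

random.seed(0)

# A toy "environment": two actions with unknown average rewards.
true_reward = {"left": 0.2, "right": 0.8}

def pull(action):
    """The environment returns a noisy reward for the chosen action."""
    return true_reward[action] + random.uniform(-0.1, 0.1)

# The agent keeps a running estimate of each action's value; it mostly
# picks the best-looking action and explores at random 10% of the time.
estimate = {"left": 0.0, "right": 0.0}
counts = {"left": 0, "right": 0}

for step in range(1000):
    if random.random() < 0.1:
        action = random.choice(["left", "right"])   # explore
    else:
        action = max(estimate, key=estimate.get)    # exploit
    r = pull(action)
    counts[action] += 1
    estimate[action] += (r - estimate[action]) / counts[action]  # running mean

print(max(estimate, key=estimate.get))  # prints right
```

Real reinforcement learning problems have states, sequences of actions, and delayed rewards, but this captures the core loop: act, observe a reward, and update the behavior.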

Reinforcement learning can, for example, be used by a robot to learn how to walk in a simulated environment. Here is a snapshot from the classic Ant-v2 environment:

In this case, the actions are the torque values applied to each leg joint; the observations are leg angles, external forces, etc.; and the reward is the speed of the robot. Learning in such a simulated environment can then be used to help a real robot walk. Such transfer from simulation to reality has, for example, been used by OpenAI to teach a robot how to manipulate a Rubik’s Cube:

It is also possible for a real robot to learn without a simulated environment, but real robots are slow compared to simulated ones and current algorithms have a hard time learning fast enough. A mitigation strategy consists of learning to simulate the real environment, a field known as model-based reinforcement learning, which is under active research.

Reinforcement learning can also be used to teach computers to play games. Famous examples include AlphaGo, which can beat any human player at the board game Go, or AlphaStar, which can do the same for the video game StarCraft:

Both of these programs were developed using reinforcement learning by having the agent play against itself. Note that the reward in such problems is only given at the end of the game (either you win or lose), which makes it challenging to learn which actions were responsible for the outcome.

Another important application of reinforcement learning is in the field of control engineering. The goal here is to dynamically control the behavior of a system (an engine, a building, etc.) so that it behaves optimally. The prototypical example is to control a pole standing on a cart by moving the cart left or right (a.k.a. the inverted pendulum problem):

In general, classic control methods are used for such problems, but reinforcement learning is entering this field. For example, reinforcement learning has been used to control the cooling system (fan speed, water flow, etc.) of Google data centers in a more efficient way:

One issue when applying reinforcement learning directly in such a real-world system is that during the learning phase, the agent might perform actions that can break the system or pose safety issues.

Reinforcement learning is probably the most exciting paradigm since the agent is learning by interacting, like a living being. Active systems have the potential to learn better than passive ones because they can decide by themselves what to explore in order to improve. We can imagine all sorts of applications using this paradigm, from a farmer robot that learns to improve crop production, to a program that learns to trade stocks, to a chatbot that learns by having discussions with humans. Unfortunately, current algorithms need a large amount of data to be effective, which is why most reinforcement learning applications use virtual environments. Also, reinforcement learning problems are generally more complicated to handle than supervised and unsupervised ones. For these reasons, reinforcement learning is less used than other paradigms in practical applications. As research is progressing, it is likely that algorithms will need less data to operate and that simpler tools will be developed. Reinforcement learning might then become a dominant paradigm.

Other Learning Paradigms

Supervised, unsupervised, and reinforcement learning are the three core learning paradigms. Nevertheless, there are other ways to learn that depend on the specificities of the problem to solve. Here are a few of these other learning paradigms worth mentioning, most of which are variations or hybrids of the core paradigms.

Semi-supervised Learning

In semi-supervised learning, a part of the data is in the form of input-output pairs, like in supervised learning:

Another part of the data only contains inputs:

The goal is generally to learn a predictive model from both of these datasets. Semi-supervised learning is thus a supervised learning problem for which some training labels are missing.

Typically, the unlabeled dataset is much bigger than the labeled dataset. One way to take advantage of this kind of data is to use a mix of unsupervised and supervised methods. Another way is to use a self-training procedure during which we train a model on the labeled data, predict the missing labels, then train on the full dataset, predict the missing labels again, and so on. Such a self-training procedure was used to obtain a state-of-the-art image identification neural network in 2019:

 
 

This network was trained with (only) 1.2 million labeled images but also with 300 million unlabeled images.
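The self-training loop described above can be sketched as follows. This is a toy one-dimensional version with an invented threshold "model" (the 2019 result used deep neural networks, but the loop is the same: train, pseudo-label, retrain):

```python
# Self-training sketch: a threshold classifier on 1-D inputs.
# Labeled data (invented): small values -> "a", large values -> "b".
labeled = [(1.0, "a"), (2.0, "a"), (8.0, "b"), (9.0, "b")]
unlabeled = [1.5, 2.5, 3.0, 7.0, 7.5, 8.5]

def train(pairs):
    """Fit a threshold halfway between the two class means."""
    a = [x for x, y in pairs if y == "a"]
    b = [x for x, y in pairs if y == "b"]
    return (sum(a) / len(a) + sum(b) / len(b)) / 2

def predict(threshold, x):
    return "a" if x < threshold else "b"

threshold = train(labeled)
for _ in range(3):  # repeat: pseudo-label the unlabeled data, retrain on everything
    pseudo = [(x, predict(threshold, x)) for x in unlabeled]
    threshold = train(labeled + pseudo)

print(threshold)  # prints 5.0
```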

Overall, semi-supervised learning is an attractive paradigm because labeling data is often expensive. However, obtaining good results with this paradigm is a bit of an art and requires more work than supervised learning. Because of these difficulties, most machine learning users tend to stick to pure supervised approaches (which means discarding examples that do not have labels).

Online Learning

Online learning is a way to learn iteratively from a stream of data. In its pure form, the model updates itself after each example given:

The model can also update itself using batches of examples. This kind of learning could be used by a bank needing to continuously update its fraud detection system by learning from the numerous transactions made every day.
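As a concrete (invented) illustration of updating a model after each incoming example, here is a sketch that fits y ≈ w·x by taking one stochastic-gradient step per example from the stream, never revisiting old data:

```python
# Online learning sketch: fit y = w * x with stochastic gradient descent,
# updating the model after each (x, y) example from a stream (invented data
# generated with a true slope of 3.0).
stream = [(x, 3.0 * x) for x in [1.0, 2.0, 0.5, 1.5, 2.5] * 40]

w = 0.0
lr = 0.05  # learning rate
for x, y in stream:
    error = w * x - y
    w -= lr * error * x   # one gradient step per incoming example

print(round(w, 2))  # prints 3.0
```

Note that each example is processed once and can then be discarded, which is what makes this setting suitable for large or never-ending data streams.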

Online learning is useful when the dataset is large and comes as a stream because it avoids having to retrain models from scratch. Also, we don’t necessarily need to store the training data in this setting. Online learning is also useful because it naturally gives more importance to more recent data than to older data (which is often less relevant). Another use of online learning is when the dataset is too large to fit into the fast memory of the computer and thus needs to be read in chunks, a procedure called out-of-core learning.

Online learning is not really a paradigm in itself since the underlying problem can be either supervised (labeled examples) or unsupervised (unlabeled examples); it is more of a learning constraint. Not every machine learning method can learn online. As a rule of thumb, methods that use a continuous optimization procedure (such as neural networks) can be used in an online learning setting.

Active Learning

Active learning is a way to teach a predictive model by interacting with an on-demand source of information. At the beginning of an active learning procedure, the data only consists of inputs:

During the learning procedure, the student model can request some of these unknown outputs from a teacher (a.k.a. oracle). A teacher is a system able to predict (sometimes not perfectly) the output from a given input:

Most of the time, the teacher is a human, but it could also be a program, such as a numeric simulation.

Active learning can, for example, be used to create an image classifier when training images are not labeled. In this case, humans would play the role of the teachers and the computer would decide which images should be sent for annotation.

Since the teacher is generally slow to respond, the computer must decide which example is the most informative in order to learn as fast as possible. For example, it might be smart to ask the teacher about inputs that the model cannot predict confidently yet.
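Here is a toy sketch (invented, not from the book) of this uncertainty-sampling idea: a one-dimensional threshold model repeatedly asks an oracle function, standing in for the human teacher, about the pool example closest to its current decision boundary:

```python
def oracle(x):
    """Ground-truth labeler, standing in for the human teacher."""
    return "a" if x < 5.0 else "b"

def train(pairs):
    """Fit a threshold halfway between the two class means."""
    a = [x for x, y in pairs if y == "a"]
    b = [x for x, y in pairs if y == "b"]
    return (sum(a) / len(a) + sum(b) / len(b)) / 2

pool = [0.5, 1.0, 3.0, 4.8, 5.2, 7.0, 9.0]   # unlabeled inputs (invented)
labeled = [(0.0, "a"), (10.0, "b")]          # one seed example of each class
threshold = train(labeled)

for _ in range(4):
    # Ask the teacher about the pool example the model is least sure about,
    # i.e. the one closest to the current decision boundary.
    x = min(pool, key=lambda p: abs(p - threshold))
    pool.remove(x)
    labeled.append((x, oracle(x)))
    threshold = train(labeled)

print(threshold)  # converges near the true boundary at 5.0
```

With only four queries, the model concentrates its questions around the boundary, which is exactly where labels are most informative.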

Active learning can be seen as a subset of reinforcement learning since the student is also an active agent. The difference is that the agent cannot alter the environment here. Such active systems have the potential to learn much faster than passive systems, and this might be a key to creating intelligent systems.

Transfer Learning

Transfer learning deals with transferring knowledge from one learning task to another learning task. It is typically used to learn more efficiently from small datasets when we have access to a much larger dataset that is similar (but different). The strategy is generally to train a model on the large dataset and then use this pre-trained model to help train another model on the task that we really care about:

Let’s use a transfer learning procedure to train a new mushroom classifier on the same 16 examples used in the first chapter:

 

Identifying images from scratch requires many more training examples. For example, the neural network behind the ImageIdentify function has been trained on about 10 million images:

 
 

This model can distinguish between about 4000 objects, but it is not detailed enough for our task:

 
 

It is possible to adapt it to our task though. This network has 24 layers that gradually improve the understanding of the image (see Chapter 11, Deep Learning Methods). In a nutshell, the first layers identify simple things, such as lines and simple shapes, while the last layers can recognize high-level concepts (although not necessarily human-understandable concepts such as “cap color” or “gills type”). We are going to use the first 22 layers of this network as a feature extractor. This means that we are going to preprocess each image with a truncated network to obtain features that are semantically richer than pixel values. We can then train a classifier on top of these new features:
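Since the original Wolfram Language code is not shown here, the following Python sketch illustrates the same structure with everything invented: a stand-in "pretrained" feature extractor replaces the truncated 22-layer network, and a simple nearest-centroid classifier is trained on top of its features:

```python
from collections import defaultdict

# Transfer learning sketch: reuse a "pretrained" feature extractor and
# train only a small classifier on top of the extracted features.
def pretrained_features(pixels):
    """Stand-in for the truncated network: map raw pixel values
    to a few semantically richer numbers."""
    return [sum(pixels) / len(pixels), max(pixels) - min(pixels)]

# Tiny "image" dataset: lists of pixel intensities with labels (invented).
images = [
    ([0.1, 0.2, 0.1, 0.2], "morel"),
    ([0.2, 0.1, 0.2, 0.1], "morel"),
    ([0.8, 0.9, 0.1, 0.9], "bolete"),
    ([0.9, 0.8, 0.2, 0.8], "bolete"),
]

# Train a nearest-centroid classifier in the extracted feature space.
feature_sets = defaultdict(list)
for pixels, label in images:
    feature_sets[label].append(pretrained_features(pixels))
centroids = {
    label: [sum(col) / len(col) for col in zip(*feats)]
    for label, feats in feature_sets.items()
}

def classify(pixels):
    f = pretrained_features(pixels)
    return min(
        centroids,
        key=lambda c: sum((a - b) ** 2 for a, b in zip(f, centroids[c])),
    )

print(classify([0.85, 0.9, 0.15, 0.85]))  # prints bolete
```

The key point is that only the small classifier on top is trained on our few examples; the feature extractor keeps the knowledge it acquired on the large dataset.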

 
 

The classifier can now recognize our mushrooms:

 
 

This classifier obtains about 85% accuracy on a test set constructed from a web image search:

 
 

This is not perfect, but if we were to train directly on the underlying pixel values, we would obtain about 50% accuracy, which is no better than random guessing:

 
 

This is a simple example of transfer learning. We used a network trained on a large dataset in order to extract a useful vector representation (a.k.a. latent features) for our related downstream task. There are other transfer learning techniques that are similar in spirit, and they generally also involve neural networks.

Transfer learning is heavily used to learn from image, audio, and text data. Without transfer learning, it would be hard to accomplish something useful in these domains. However, transfer learning is not used much on typical structured data (bank transactions, sales data, etc.). The reason is that structured datasets are somewhat unique, which makes it harder to transfer knowledge from one to another. This might change in the future; after all, our brains are doing some kind of transfer learning all the time, reusing already-learned concepts in order to learn new things faster.

Self-Supervised Learning

Self-supervised learning generally refers to a supervised learning problem for which the inputs and outputs can be obtained from the data itself, without needing any human annotation. For example, let’s say that we want to predict the next word after a given sequence of English words. To learn how to do this, we can use a dataset of sentences:

We can then transform this dataset into a supervised learning problem:
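This transformation is easy to make concrete. Here is a Python sketch (with invented sentences) that turns raw text into (previous words, next word) training pairs without any human annotation:

```python
# Self-supervised labeling sketch: turn raw sentences into (input, output)
# pairs for next-word prediction -- no human annotation needed.
sentences = [
    "the cat sat on the mat",
    "dogs love long walks",
]

pairs = []
for sentence in sentences:
    words = sentence.split()
    for i in range(1, len(words)):
        pairs.append((words[:i], words[i]))   # (previous words, next word)

print(pairs[0])  # prints (['the'], 'cat')
```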

The input-output pairs are therefore obtained from the data itself. As another example, let’s say we want to learn how to colorize images. We can take images that are already in color and convert them to grayscale to obtain a supervised dataset:

 
 

Again, the prediction task is already present in the data. There are plenty of other applications like this (predicting missing pixel values, predicting the next frame from a video, etc.).

Self-supervised learning is not really a learning paradigm since it refers to how the data was obtained, but it is a useful term to represent this class of problems for which labeling is free. Typically, self-supervised learning is used to learn a representation (see Chapter 7, Dimensionality Reduction), which is then used to tackle a downstream task through a transfer learning procedure. The self-supervised task is then called the pretext task or auxiliary task. Both next-word prediction and image colorization are examples of such pretext tasks that are used for transfer learning.

Takeaways

Supervised learning is about learning to predict from examples of correct predictions.
Unsupervised learning is about modeling unlabeled data.
Clustering, dimensionality reduction, missing value imputation, and anomaly detection are the typical tasks of unsupervised learning.
Reinforcement learning is about agents learning by themselves how to behave in their environments.
Different learning paradigms typically solve different kinds of tasks.
Supervised learning is more common than unsupervised learning, which is more common than reinforcement learning.
Learning paradigms can be used in conjunction.
Semi-supervised learning is about learning from supervised and unsupervised data.
Online learning is about continuously learning from a stream of data.
Active learning is about learning from a teacher by asking questions.
Transfer learning is about transferring knowledge from one learning task to another learning task.
Vocabulary
Unsupervised Learning
unsupervised learning: learning from data examples that do not have labels
clustering: separating data examples into groups
dimensionality reduction: reducing the number of variables in a dataset while preserving some properties of the data
anomaly detection: identifying examples that are anomalous
anomaly (a.k.a. outlier): data example that substantially differs from other data examples
imputation: filling in missing values of a dataset
generative modeling: learning to generate synthetic data examples
Reinforcement Learning
reinforcement learning: learning by interacting with an environment
environment: external system that the reinforcement learning agent interacts with
actions: things that the agent does in the environment
observations: feedback given by the environment
reward: special observation given by the environment to inform the agent if the task is well done
policy: model predicting which action to take given previous actions and observations
model-based reinforcement learning: reinforcement learning where a model is trained to simulate the real environment
Exercises
2.1 Find which paradigm can be used to tackle the applications described in Chapter 1.
Tech Notes
Unsupervised vs. Supervised

In this book, we define supervised learning as learning from input-output pairs and unsupervised learning as learning from unlabeled data. This distinction only concerns the form of the data, not the type of data (image, text, etc.) or the method used. Researchers and expert practitioners may have a slightly different (and fuzzier) definition, which is more related to the method used to solve the task. As an example, imagine that the goal is to generate images given their class using a dataset such as:

 

Technically, this is a supervised problem, but many would call it unsupervised because the labels have many degrees of freedom (the pixels), which means that the methods used to tackle such a task are very similar to the methods used in a pure unsupervised setting (such as learning the distribution of images without classes). Both definitions are useful depending on the context.
