DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Summary
This communication is a First Office Action Non-Final Rejection on the merits.
Claim 1 is currently pending and considered below.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 1 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 1 recites a method claim but contains an apparatus as an element. It is a hybrid claim, and it is unclear whether the claim is directed to a method or an apparatus. The overall structure appears to lean toward a method claim, and the Examiner suggests amending the apparatus portion of the claim into method-claim form. Correction is required.
Claim 1 is generally narrative and indefinite, failing to conform to current U.S. practice. It appears to be a direct submission of a research paper and is replete with errors.
For example:
In Claim 1, line 4, “an episodic memory model” lacks antecedent basis in the claim, since an episodic memory model was previously claimed in line 1, and it is not clear whether “an episodic memory model” refers to the “episodic memory model” of line 1.
In Claim 1, the phrase "MATLAB” is a trademark. Where a trademark or trade name is used in a claim as a limitation to identify or describe a particular material or product, the claim does not comply with the requirements of 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. Ex parte Simpson, 218 USPQ 1020 (Bd. App. 1982). See also Eli Lilly & Co. v. Apotex, Inc., 837 Fed. Appx. 780, 784-85, 2020 USPQ2d 11531 (Fed. Cir. 2020).
In Claim 1, the phrase “generated memory units” has nothing to refer to. No memory units are generated prior to the phrase “generated memory units,” and it is not clear what memory units are generated or how they are generated. Clarification is required.
The errors explicitly identified in Claim 1 above are given by way of example only and are not inclusive of all errors. Applicant should carefully review and amend all the claims to ensure that all errors are corrected.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 1 is a method claim but claims an apparatus as a claim element (a hardware platform which includes a remote control section and a mobile robot section). The claim is a hybrid claim claiming two statutory categories. Correction is required.
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
101 Analysis – Step 1
Claim 1 is directed to a method of constructing an episodic memory model based on rat brain visual pathway. Therefore, claim 1 is within at least one of the four statutory categories.
101 Analysis – Step 2A, Prong I
Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes.
Independent claim 1 includes limitations that recite an abstract idea (emphasized below)
and will be used as a representative claim for the remainder of the 101 rejection. Claim 1 recites:
A method for constructing episodic memory model based on rat brain visual pathway and entorhinal-hippocampal cognitive mechanism, wherein structure and information flow of hardware components used by the method are as follows:
a hardware operation platform equipped with an episodic memory model that mimics visual pathway and entorhinal-hippocampal structure of a rat's brain comprises a remote control section and a mobile robot section, the remote control section is implemented through a wireless keyboard, while the mobile robot section includes a robotic chassis and a computer, the robotic chassis includes motors, encoders, and gyroscope, the motors are used to drive a robot moving within an environment, the encoders are used to obtain speed information of the robot during movement of the robot, and the gyroscope is used to obtain directional information of the robot during movement of the robot, a RGB camera and a depth camera are used to acquire episodic information in the direction of the movement of the robot, respectively; the computer serves as a main controller for the robot and is equipped with ROS operating system and MATLAB; the MATLAB is used to run various computational models, while the ROS operating system serves as an information bridge between the computational models and the robotic chassis; during operation of the robot, the computer, the RGB camera and the depth camera are placed on top of the robotic chassis, with the computer connected to both the robotic chassis and the RGB camera and the depth camera via data cables, and the computer is wirelessly connected to the wireless keyboard;
a process by which the robot constructs an episodic cognitive map is as follows: initially, an operator provides a movement instruction to the robot via the wireless keyboard, guiding the robot to navigate through the environment; during the movement of the robot, information is collected through various sensors at regular intervals, including the encoders, the gyroscope, the RGB camera, and the depth camera; the speed information is collected through the encoders, and the directional information is collected through the gyroscope; subsequently, the encoders and the gyroscope transmit obtained motion information to the computer through the data cables, and then pass the motion information on to an entorhinal-hippocampal CA3 neural computing model through the ROS operating system to obtain position and direction information of the robot in the environment; the RGB camera and the depth camera are responsible for capturing RGB images and depth images in the direction of the movement of the robot, which are then transferred via the data cables to the computer and input into "where pathway" and "what pathway" computational models; these models are used to calculate orientation information of objects in the environment relative to the robot; subsequently, attributes and positional information of external environmental objects from the two visual pathways are fused with the positional and orientational information outputted by the entorhinal-hippocampal CA3 neurocomputational model; this fused information is stored in a memory unit model with topological structural relationships; episodic information is utilized to correct path integration errors during an exploration process of the robot, thereby constructing an episodic cognitive map that represents the environment; generated memory units and episodic cognitive map are stored on a hard drive of the computer;
the method comprising the following steps:
provide the robot installed with the entorhinal-hippocampal CA3 neurocomputational model, a visual pathway computing model and a similar scene measurement algorithm;
step 1. During an exploration process, the robot explores environment, collects RGB image information of the environment through a camera, and collects head-direction angle and speed information of the robot through a gyroscope and an encoder;
step 2. input the head-direction angle and speed information into an entorhinal- hippocampus CA3 neural computing model to obtain the robot's position information of the robot in the environment;
step 3. input the RGB image information into a visual pathway computing model to obtain environmental features within robot's field of view, including the number of objects in the environment, attribute information of the objects, angles of the objects relative to the robot, and distances between objects and the robot;
step 4. construct cognitive nodes: the robot constructs a new cognitive node every time the robot moves, and continuously constructs cognitive nodes in the exploration process;
there are topological connections between adjacent cognitive nodes;
among the cognitive nodes, the i-th cognitive nodes are represented by which are used to store current scenario information, position, and head-direction angle; a mathematical expression of [media_image1.png] is as follows;
[equation image: media_image2.png]
wherein, [media_image3.png] represents the robot's head-direction angle at the i-th cognitive node, [media_image4.png] represents the robot's position in the environment at the i-th cognitive node, [media_image5.png] represents the environmental features within the robot's field of view at the i-th cognitive node, and [media_image6.png] represents the number of objects at the i-th cognitive node, pij represents the attribute of the j-th object at the i-th cognitive node, represents the orientation angle of the j-th object at the i-th cognitive node relative to the robot, and dij represents a distance between the j-th object at the i-th cognitive node and the robot;
step 5. construct an episodic cognition map of environmental expression to guide movement of the robot in the environment;
step 2 further comprises the following steps:
s1.1 input the head-direction angle and speed information of robot into a firing model of stripe cells to obtain a firing rate of stripe cells;
s1.2 input the firing rate of stripe cells into a firing model of grid cells to obtain a firing rate of grid cells;
s1.3 input the firing rate of grid cells into a firing model of dentate gyrus neurons to obtain a firing rate of dentate gyrus neurons, and then input the firing rate of grid cells and the firing rate of dentate gyrus neurons into hippocampal CA3 place cell firing model, obtain a firing rate of hippocampal CA3 place cells;
s1.4 calculate the position of the robot in the environment based on the firing rate of hippocampal CA3 place cells; a mathematical expression of the firing rate of stripe cells is given as:
[equation image: media_image7.png]
in formula (2), t represents the time at the current moment, f represents an oscillation frequency of neuron cell body, fd represents an oscillation frequency of neuron dendrites; [media_image8.png] represents a path integral along a preferred direction angle [media_image9.png] of the stripe cells, where [media_image10.png] represents a component velocity of the rat at the preferred direction angle [media_image9.png], and its mathematical expression is as follows:
[equation image: media_image11.png]
in formula (3), v represents a current moving speed of the robot, and [media_image12.png] represents a current head-direction angle of the robot, a mathematical expression of neuron dendritic oscillation frequency fd can be obtained as:
[equation image: media_image13.png]
where B1 is a reciprocal of a wavelength of a stripe wave, and the grid cell firing model is obtained by superimposing the firing rates of three stripe cells with a difference of 120º in the preferred direction, the specific mathematical expression is:
[equation image: media_image14.png]
[equation image: media_image15.png]
in formula (5), values of the three stripe cell preferred direction angles [media_image9.png] are [media_image16.png] respectively, where [media_image17.png] represents a deviation angle of the stripe cells, and its value is randomly selected within 0º-360º; [media_image17.png] also represents an orientation angle of a grid field; after the firing rate of grid cells is obtained, the firing rate of grid cells is used as a forward input signal of the dentate gyrus neurons, and the mathematical expression of the excitatory input [media_image18.png] received by the i-th dentate gyrus neuron is:
[equation image: media_image19.png]
in formula (6), i and j represent numbers of dentate gyrus neurons and grid cells respectively, gj(t) represents the firing rate of the j-th grid cell, and ngrid represents the number of grid cells; W represents an excitatory input connection weight matrix, where Wij represents a connection weight from the j-th grid cell to the i-th dentate gyrus neuron, and the calculation formula of each connection weight is as follows:
[equation image: media_image20.png]
in formula (7), s represents synapse size, and a size of s is randomly selected in the range of (0-0.2) µm²; the proportion P(s) of each synapse size s among all synapses roughly obeys the following mathematical expression:
[equation image: media_image21.png]
in formula (8), A = 100.7, B = 0.02, σ1 = 0.022, σ2 = 0.018, σ3 = 0.15; the excitatory input connection weight matrix W can be assigned by formula (7) and formula (8), so as to realize the excitatory transmission from grid cells to dentate gyrus neurons; firing activity of dentate gyrus neurons within a given spatial region is subject to a WTA learning rule that describes competing activity arising from gamma-frequency feedback inhibition; the mathematical expression of the firing rate of dentate gyrus neurons is:
[equation image: media_image22.png]
in formula (9), [media_image23.png] represents the firing rate of dentate gyrus neurons, k1 is 0.1, [media_image24.png] represents a maximum value of grid cell forward input received by dentate gyrus neurons; H(x) is a rectification function, when x > 0, H(x) = 1; otherwise, when x < 0, the function value is 0; and the excitatory input signal from the dentate gyrus neuron to the hippocampal CA3 place cell is as follows:
[equation image: media_image25.png]
in formula (10), i and j represent serial numbers of hippocampal CA3 place cells and dentate gyrus neurons respectively, and ndentate represents the number of dentate gyrus neurons, which is set to 1000;
[media_image26.png] represents a maximum firing rate of neurons in the dentate gyrus; since [media_image27.png] is always greater than zero, dividing it by the maximum firing rate is similar to normalization; Ω represents an excitatory input connection weight matrix, where Ωij represents the connection weight from the j-th dentate gyrus neuron to the i-th hippocampal CA3 place cell, and a value of Ωij ranges from 0-1; distribution function of the connection weight value is defined as a non-negative Gaussian distribution, and the mathematical expression is as follows:
[equation image: media_image28.png]
in formula (11), A2 = 1.033, μ = 24, σ = 13; the excitatory input connection weight matrix Ω can be assigned by formula (11), so as to realize the excitatory transmission from the dentate gyrus neurons to the hippocampal CA3 place cells; the hippocampal CA3 place cells of the hippocampus receive forward input from the neurons of the entorhinal cortex and the dentate gyrus at the same time, so the mathematical expression of the total excitatory input signal received by the hippocampal CA3 place cells is:
[equation image: media_image29.png]
in formula (12), [media_image30.png] are respectively forward input signals of grid cells and dentate gyrus neurons, and [media_image31.png] represents an average strength of grid cell forward input signals, and its mathematical expression is:
[equation image: media_image32.png]
in formula (13), [media_image33.png] represents the number of hippocampal CA3 place cells, and the mathematical expression of the hippocampal CA3 place cell firing model is as follows:
[equation image: media_image34.png]
in formula (14), [media_image35.png] represents a maximum value of total excitation input signal received by hippocampal CA3 place cells, and a value of k2 is 0.1;
s1.4 further includes the following steps:
construct a place cell plate model which is capable for encoding a given spatial region; a shape of the cell plate is a square, and a side length of the cell plate is Nx, and obtain position coordinates of the robot in given spatial region; wherein, a position of current robot in the coding space region of current place cell plate is calculated by formula (15):
[equation image: media_image36.png]
in formula (15), [media_image37.png] represent abscissa and ordinate of an excitatory activity packet on the place cell plate at time t, respectively, and [media_image38.png] represent the firing rate of place cells in row i and column j on the cell plate at time t, which is calculated according to the hippocampal CA3 place cell firing rate;
s1.4.2, using physiological characteristic of border cells with specific firing effects on area boundary, realize periodic reset of the firing of stripe cells, and obtain the position coordinates of the robot in any size space area: the specific implementation method is as follows: at the initial moment, the rat is set to be located in a center of the square area encoded by the place cell plate, and when the rat reaches any boundary of the given encoding area space, a path integration [media_image39.png] of all stripe cells in the direction of preferred angle [media_image40.png] is set to zero, so that the rat is in a center of a positive direction area coded by the place cell plate after reset; in this way, every time the firing reset of stripe cells is completed, the place cell plate can immediately generate a code for a new spatial region, thereby completing the robot's position cognition for any size space;
an initial position of the robot movement is located in the center of the square area encoded by the place cell plate; a physical coordinate system is defined with the initial movement position as origin, and the horizontal direction of place cell plate is positive direction of X-axis; the physical coordinate systems mentioned below are all for this coordinate system; then the mathematical expression of the position coordinates [media_image41.png] of the robot in any size space area is as follows:
[equation image: media_image42.png]
in formula (16), β is a proportional coefficient for transforming the coordinates on the place cell plate to the real position coordinates, and its value is the ratio of side length L of the square coding area to the side length Nx of the place cell plate; Qx and Qy respectively represent the horizontal and vertical coordinates of the rat in any size space area when the place cell plate was reset last time, which provides accurate position information for the subsequent construction of cognitive node;
a visual pathway computing model includes "what pathway" and "where pathway", where the "what pathway" model adopts the DPM algorithm, and its input is the input of environmental RGB image information, which is used to obtain the number and attribute information of objects in the environment;
the "where pathway" model is used to obtain the orientation angle and distance information of the object relative to the robot, including the direction relative to the robot and the distance from the robot;
the "where pathway" working process is: when the robot is exploring in the environment, the PID algorithm is adopted for closed-loop control of the robot's rotation speed, so that the object to be detected is placed in the center of the field of vision; the robot will face a new scene every time it moves, and i is defined as the scene sequence number; firstly, the number of objects in the i-th scene is identified by DPM algorithm, set as [media_image43.png], the current head-direction angle is [media_image44.png], and the sequence number of objects currently detected in the i-th scene is j;
then, the orientation angle information of each object is solved successively; the mathematical expression of the current pixel deviation [media_image45.png] is:
[equation image: media_image46.png]
[media_image47.png] represents a pixel value in the center of the field of view, [media_image48.png] represents an average position of the left and right boundaries of the object to be detected in the image, and the mathematical expression of the given value of the current rotation speed ω obtained by the PID algorithm is:
[equation image: media_image49.png]
when the object to be detected is placed in the center of the field of view, record the orientation angle @P of the robot head at this time, then the direction angle of the j-th object in the i-th scene relative to the robot before rotation, [media_image50.png]; at the same time, the distance dij between the robot and object to be measured is obtained by the depth camera; through the above operations, the orientation angle and distance information of the j-th object relative to the robot at the current moment can be obtained;
after the information of all objects in the current scene is obtained, the head-direction angle of the robot is rotated to [media_image51.png] again to continue the exploration and cognition in the environment;
step 5 further comprises the following steps:
S5.1 through a similar scene measurement algorithm, establish a topological connection relationship between cognitive nodes with similar scenario information, so as to expand the topological connection relationship between adjacent cognitive nodes;
S5.2 use the topological relationship among all cognitive nodes to correct the cumulative error of the head-direction angle and position of the mobile robot during the exploration process, and construct a topological cognitive map;
S5.3 calculate the position of environmental objects in the physical coordinate system and calibrate the position of environment objects in the topological map to realize the construction of the environmental episodic cognitive map;
a specific algorithm for measuring similar scenes is as follows:
set two cognitive nodes ea and eb, first judge whether the number of objects in the two scenarios is the same and whether the attributes of the corresponding objects are consistent, if one of the above conditions is not satisfied, it is judged that the two scenarios do not match; otherwise, by measuring whether the orientation angle information of each object in the scenario is consistent, the mathematical expression of the measurement function
[media_image52.png] is:
[equation image: media_image53.png]
in formula (19), [media_image54.png] represent weights of direction information and distance information respectively, [media_image55.png] = 1, set a matching threshold as St, and select an appropriate value according to the actual situation; when a value of the metric function is less than the matching threshold, it is judged that the two scenes match, and at this time the topological relationship between cognitive nodes ea and eb is established;
S5.2 specifically includes:
it is known that the current cognitive node is ei, and the cognitive node associated with it is ek; this represents that there is a topological relationship between node ei and node ek; then the mathematical expression of the pose correction of cognitive nodes ei and ek is as follows:
firstly, calculate the change amount of [media_image56.png] of the cognitive nodes, which is shown in formula (20);
[equation image: media_image57.png]
in formula (20), [media_image58.png] represent the horizontal and vertical coordinates of the place field's center corresponding to the cognitive points ei and ek respectively, dik represents the distance between the place field's center corresponding to the cognitive point ei and ek, [media_image59.png] respectively represents the head-direction angles at cognitive points ei and ek; after the change amount is obtained, the corrected node parameters can be iteratively calculated step by step according to the change amount, and the relevant mathematical expressions are shown in formula (21) and (22); in formula (21) and (22), t and t + 1 represent the time before and after each iterative operation, respectively, and δ represents the correction rate of the cumulative error;
[equation image: media_image60.png]
a map convergence criterion algorithm is added after S5.2, to improve the real-time performance of the map construction process, define the map convergence at time t as Δd(t), and its mathematical expression is as follows:
[equation image: media_image61.png]
in formula (23), nsum represents the total number of current cognitive nodes, and ni represents the number of nodes associated with cognitive node i; set the scale factor of the convergence criterion as σ; when Δd(t) - Δd(t + 1) < σΔd(t + 1), it is judged that there is no need to continue the map update iteration at this time; otherwise, continue to perform the update iteration of cognitive map construction;
the specific steps in step S5.3 are as follows:
after obtaining the topological cognitive map and scenario information of the environment, the two can be integrated to obtain the episodic cognitive map of the environment, the specific method is as follows: according to the position of the robot in physical coordinate system and the orientation angle and distance information of the object relative to the robot obtained above, the positions of all objects in the physical coordinate system can be calculated; insert each object in physical coordinate system containing the topological map according to its attributes and position information to obtain the episodic cognitive map of the environment representation.
The examiner submits that the foregoing bolded limitation(s) constitute a “mathematical concept” because under its broadest reasonable interpretation, the claim covers performance of the limitation in the human mind. For example, “a mathematical expression of the firing rate of stripe cells is given as: …” in the context of this claim encompasses a mathematical equation that is directed to a specific mathematical concept. Accordingly, the claim recites at least one abstract idea.
101 Analysis – Step 2A, Prong II
Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.”
In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” while the bolded portions continue to represent the “abstract idea”):
A method for constructing episodic memory model based on rat brain visual pathway and entorhinal-hippocampal cognitive mechanism, wherein structure and information flow of hardware components used by the method are as follows:
a hardware operation platform equipped with an episodic memory model that mimics visual pathway and entorhinal-hippocampal structure of a rat's brain comprises a remote control section and a mobile robot section, the remote control section is implemented through a wireless keyboard, while the mobile robot section includes a robotic chassis and a computer, the robotic chassis includes motors, encoders, and gyroscope, the motors are used to drive a robot moving within an environment, the encoders are used to obtain speed information of the robot during movement of the robot, and the gyroscope is used to obtain directional information of the robot during movement of the robot, a RGB camera and a depth camera are used to acquire episodic information in the direction of the movement of the robot, respectively; the computer serves as a main controller for the robot and is equipped with ROS operating system and MATLAB; the MATLAB is used to run various computational models, while the ROS operating system serves as an information bridge between the computational models and the robotic chassis; during operation of the robot, the computer, the RGB camera and the depth camera are placed on top of the robotic chassis, with the computer connected to both the robotic chassis and the RGB camera and the depth camera via data cables, and the computer is wirelessly connected to the wireless keyboard;
a process by which the robot constructs an episodic cognitive map is as follows: initially, an operator provides a movement instruction to the robot via the wireless keyboard, guiding the robot to navigate through the environment; during the movement of the robot, information is collected through various sensors at regular intervals, including the encoders, the gyroscope, the RGB camera, and the depth camera; the speed information is collected through the encoders, and the directional information is collected through the gyroscope; subsequently, the encoders and the gyroscope transmit obtained motion information to the computer through the data cables, and then pass the motion information on to an entorhinal-hippocampal CA3 neural computing model through the ROS operating system to obtain position and direction information of the robot in the environment; the RGB camera and the depth camera are responsible for capturing RGB images and depth images in the direction of the movement of the robot, which are then transferred via the data cables to the computer and input into "where pathway" and "what pathway" computational models; these models are used to calculate orientation information of objects in the environment relative to the robot; subsequently, attributes and positional information of external environmental objects from the two visual pathways are fused with the positional and orientational information outputted by the entorhinal-hippocampal CA3 neurocomputational model; this fused information is stored in a memory unit model with topological structural relationships; episodic information is utilized to correct path integration errors during an exploration process of the robot, thereby constructing an episodic cognitive map that represents the environment; generated memory units and episodic cognitive map are stored on a hard drive of the computer;
the method comprising the following steps:
provide the robot installed with the entorhinal-hippocampal CA3 neural computing model, a visual pathway computing model, and a similar scene measurement algorithm;
step 1. the robot explores the environment, collects RGB image information of the environment through the camera, and collects head-direction angle and speed information of the robot through the gyroscope and the encoders;
step 2. input the head-direction angle and speed information into the entorhinal-hippocampal CA3 neural computing model to obtain the robot's position information in the environment;
step 3. input the RGB image information into the visual pathway computing model to obtain environmental features within the robot's field of view, including the number of objects in the environment, attribute information of the objects, angles of the objects relative to the robot, and distances between the objects and the robot;
step 4. construct cognitive nodes: the robot constructs a new cognitive node every time it moves, and continuously constructs cognitive nodes in the process of exploring the environment; there are topological connections between adjacent cognitive nodes; among them, the i-th cognitive node is represented by [symbol image: media_image1.png], which is used to store current scenario information, position, and head-direction angle; a mathematical expression of [symbol image: media_image1.png] is as follows:
[equation image: media_image2.png]
wherein, [symbol image: media_image3.png] represents the robot's head-direction angle at the i-th cognitive node, [symbol image: media_image4.png] represents the robot's position in the environment at the i-th cognitive node, [symbol image: media_image5.png] represents the environmental features within the robot's field of view at the i-th cognitive node, and [symbol image: media_image6.png] represents the number of objects at the i-th cognitive node; pij represents the attribute of the j-th object at the i-th cognitive node, represents the orientation angle of the j-th object at the i-th cognitive node relative to the robot, and dij represents a distance between the j-th object at the i-th cognitive node and the robot;
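Purely for illustration of the information a cognitive node stores (scenario information, position, and head-direction angle), the structure can be sketched in Python; all type and field names here are hypothetical, and the claimed implementation runs its models in MATLAB:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ObjectFeature:
    # j-th object seen at a cognitive node: attribute p_ij, orientation
    # angle relative to the robot, and distance d_ij from the robot
    attribute: str
    angle_deg: float
    distance_m: float

@dataclass
class CognitiveNode:
    # i-th cognitive node: head-direction angle, position in the
    # environment, and environmental features within the field of view
    head_direction_deg: float
    position: Tuple[float, float]
    objects: List[ObjectFeature] = field(default_factory=list)

    @property
    def num_objects(self) -> int:
        # number of objects observed at this node
        return len(self.objects)

node = CognitiveNode(head_direction_deg=90.0, position=(1.0, 2.0),
                     objects=[ObjectFeature("chair", 15.0, 1.2)])
```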
step 5. construct an episodic cognitive map of environmental expression; wherein step 2 further comprises the following steps:
s1.1 input the head-direction angle and speed information of robot into a firing model of stripe cells to obtain a firing rate of stripe cells;
s1.2 input the firing rate of stripe cells into a firing model of grid cells to obtain a firing rate of grid cells;
s1.3 input the firing rate of grid cells into a firing model of dentate gyrus neurons to obtain a firing rate of dentate gyrus neurons, and then input the firing rate of grid cells and the firing rate of dentate gyrus neurons into a hippocampal CA3 place cell firing model to obtain a firing rate of hippocampal CA3 place cells;
s1.4 calculate the position of the robot in the environment based on the firing rate of hippocampal CA3 place cells; a mathematical expression of the firing rate of stripe cells is given as:
[equation image: media_image7.png]
in formula (2), t represents the time at the current moment, f represents an oscillation frequency of the neuron cell body, and fd represents an oscillation frequency of the neuron dendrites; [symbol image: media_image8.png] represents a path integral along a preferred direction angle [symbol image: media_image9.png] of the stripe cells, where [symbol image: media_image10.png] represents a component velocity of the rat at the preferred direction angle [symbol image: media_image9.png], and its mathematical expression is as follows:
[equation image: media_image11.png]
in formula (3), v represents a current moving speed of the robot, and [symbol image: media_image12.png] represents a current head-direction angle of the robot; a mathematical expression of the neuron dendritic oscillation frequency fd can be obtained as:
[equation image: media_image13.png]
where B1 is the reciprocal of the wavelength of a stripe wave; the grid cell firing model is obtained by superimposing the firing rates of three stripe cells whose preferred directions differ by 120º, and the specific mathematical expression is:
[equation image: media_image14.png]
[equation image: media_image15.png]
in formula (5), the values of the three stripe cell preferred direction angles [symbol image: media_image9.png] are [symbol image: media_image16.png], respectively, where [symbol image: media_image17.png] represents a deviation angle of the stripe cells, and its value is randomly selected within 0º-360º; [symbol image: media_image17.png] also represents an orientation angle of a grid field; after the firing rate of grid cells is obtained, it is used as a forward input signal of the dentate gyrus neurons, and the mathematical expression of the excitatory input [symbol image: media_image18.png] received by the i-th dentate gyrus neuron is:
[equation image: media_image19.png]
in formula (6), i and j represent numbers of dentate gyrus neurons and grid cells respectively, gj(t) represents the firing rate of the j-th grid cell, and ngrid represents the number of grid cells; W represents an excitatory input connection weight matrix, where Wj represents a connection weight from the j-th grid cell to the i-th dentate gyrus neuron, and the calculation formula of each connection weight is as follows:
[equation image: media_image20.png]
in formula (7), s represents synapse size, and the size of s is randomly selected in the range of 0-0.2 µm2; the proportion P(s) of each synapse size s among all synapses roughly obeys the following mathematical expression:
[equation image: media_image21.png]
in formula (8), A = 100.7, B = 0.02, σ1 = 0.022, σ2 = 0.018, σ3 = 0.15; the excitatory input connection weight matrix W can be assigned by formulas (7) and (8), so as to realize the excitatory transmission from grid cells to dentate gyrus neurons; the firing activity of dentate gyrus neurons within a given spatial region is subject to a winner-take-all (WTA) learning rule that describes competing activity arising from gamma-frequency feedback inhibition; the mathematical expression of the firing rate of dentate gyrus neurons is:
[equation image: media_image22.png]
in formula (9), [symbol image: media_image23.png] represents the firing rate of dentate gyrus neurons, k1 is 0.1, and [symbol image: media_image24.png] represents a maximum value of the grid cell forward input received by dentate gyrus neurons; H(x) is a rectification function: when x > 0, H(x) = 1; otherwise, when x < 0, the function value is 0; and the excitatory input signal from the dentate gyrus neurons to the hippocampal CA3 place cells is as follows:
[equation image: media_image25.png]
in formula (10), i and j represent serial numbers of hippocampal CA3 place cells and dentate gyrus neurons respectively, and ndentate represents the number of dentate gyrus neurons, which is set to 1000;
[symbol image: media_image26.png] represents a maximum firing rate of neurons in the dentate gyrus; since [symbol image: media_image27.png] is always greater than zero, dividing it by the maximum firing rate is similar to normalization; Ω represents an excitatory input connection weight matrix, where Ωij represents the connection weight from the j-th dentate gyrus neuron to the i-th hippocampal CA3 place cell, and the value of Ωij ranges from 0 to 1; the distribution function of the connection weight value is defined as a non-negative Gaussian distribution, and the mathematical expression is as follows:
[equation image: media_image28.png]
in formula (11), A2 = 1.033, μ = 24, σ = 13; the excitatory input connection weight matrix Ω can be assigned by formula (11), so as to realize the excitatory transmission from the dentate gyrus neurons to the hippocampal CA3 place cells; the hippocampal CA3 place cells receive forward input from the neurons of the entorhinal cortex and the dentate gyrus at the same time, so the mathematical expression of the total excitatory input signal received by the hippocampal CA3 place cells is:
[equation image: media_image29.png]
in formula (12), [symbol images: media_image30.png] are respectively the forward input signals of grid cells and dentate gyrus neurons, and [symbol image: media_image31.png] represents an average strength of the grid cell forward input signals; its mathematical expression is:
[equation image: media_image32.png]
in formula (13), [symbol image: media_image33.png] represents the number of hippocampal CA3 place cells, and the mathematical expression of the hippocampal CA3 place cell firing model is as follows:
[equation image: media_image34.png]
in formula (14), [symbol image: media_image35.png] represents a maximum value of the total excitation input signal received by hippocampal CA3 place cells, and the value of k2 is 0.1; s1.4 further includes the following steps:
s1.4.1, construct a place cell plate model capable of encoding a given spatial region; the shape of the cell plate is a square with side length Nx; obtain the position coordinates of the robot in the given spatial region, wherein the position of the current robot in the coding space region of the current place cell plate is calculated by formula (15):
[equation image: media_image36.png]
in formula (15), [symbol images: media_image37.png] represent the abscissa and ordinate of an excitatory activity packet on the place cell plate at time t, respectively, and [symbol image: media_image38.png] represents the firing rate of the place cell in row i and column j on the cell plate at time t, which is calculated according to the hippocampal CA3 place cell firing rate;
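The readout in formula (15) is available only as an image; a common realization of such an activity-packet readout is the firing-rate-weighted centroid over the place cell plate, sketched here in Python as an assumption rather than the claimed expression:

```python
import numpy as np

def activity_packet_centroid(rates):
    # rates: Nx-by-Nx array of place cell firing rates on the plate;
    # returns (abscissa, ordinate) of the excitatory activity packet
    # as the firing-rate-weighted centroid of the plate
    rates = np.asarray(rates, dtype=float)
    rows, cols = np.indices(rates.shape)
    total = rates.sum()
    x = float((cols * rates).sum() / total)  # abscissa X(t)
    y = float((rows * rates).sum() / total)  # ordinate Y(t)
    return x, y

plate = np.zeros((5, 5))
plate[2, 3] = 1.0   # a single active place cell at row 2, column 3
cx, cy = activity_packet_centroid(plate)
```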
s1.4.2, using the physiological characteristic of border cells, which fire specifically at area boundaries, realize a periodic reset of the firing of stripe cells and obtain the position coordinates of the robot in a space area of any size; the specific implementation method is as follows: at the initial moment, the rat is set to be located in the center of the square area encoded by the place cell plate, and when the rat reaches any boundary of the given encoding area space, the path integration [symbol image: media_image39.png] of all stripe cells in the direction of the preferred angle [symbol image: media_image40.png] is set to zero, so that after the reset the rat is again in the center of the area encoded by the place cell plate; in this way, every time the firing reset of the stripe cells is completed, the place cell plate can immediately generate a code for a new spatial region, thereby completing the robot's position cognition for a space of any size;
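The periodic reset described in s1.4.2 can be sketched in one dimension: a plate-local path integral stands in for the stripe-cell integrals, and a global offset (playing the role of Qx, Qy in formula (16)) accumulates at each boundary reset. Function and variable names are hypothetical:

```python
def integrate_with_reset(velocities, dt, half_side):
    # local:  path integral within the currently encoded square region
    # origin: global coordinate of the most recent reset point
    local = 0.0
    origin = 0.0
    trajectory = []
    for v in velocities:
        local += v * dt
        if abs(local) >= half_side:   # reached a boundary of the region
            origin += local            # remember the global position
            local = 0.0                # reset: agent back at plate center
        trajectory.append(origin + local)
    return trajectory

traj = integrate_with_reset([1.0] * 5, dt=1.0, half_side=2.0)
```

Because the global offset survives each reset, the position estimate keeps growing even though the plate only ever encodes a bounded region.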
an initial position of the robot movement is located in the center of the square area encoded by the place cell plate; a physical coordinate system is defined with the initial movement position as the origin, and the horizontal direction of the place cell plate as the positive direction of the X-axis; the physical coordinate systems mentioned below all refer to this coordinate system; then the mathematical expression of the position coordinates [symbol image: media_image41.png] of the robot in a space area of any size is as follows:
[equation image: media_image42.png]
in formula (16), β is a proportional coefficient for transforming the coordinates on the place cell plate to the real position coordinates, and its value is the ratio of the side length L of the square coding area to the side length Nx of the place cell plate; Qx and Qy respectively represent the horizontal and vertical coordinates of the rat in the space area when the place cell plate was last reset, which provides accurate position information for the subsequent construction of cognitive nodes;
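A hedged Python reading of the plate-to-physical transform: the scale factor beta = L / Nx is as stated for formula (16), while the subtraction of the plate center and the function name are assumptions, since the formula itself is only available as an image:

```python
def plate_to_physical(xt, yt, qx, qy, side_length, nx):
    # (xt, yt): activity-packet coordinates on the place cell plate
    # (qx, qy): physical coordinates at the last plate reset (Qx, Qy)
    beta = side_length / nx           # ratio of region side L to plate side Nx
    x = qx + beta * (xt - nx / 2.0)   # plate center maps to the reset point
    y = qy + beta * (yt - nx / 2.0)
    return x, y

# for a 100-cell plate encoding a 10 m region, the plate center
# maps back to the position of the last reset
x0, y0 = plate_to_physical(50, 50, 0.0, 0.0, 10.0, 100)
```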
a visual pathway computing model includes a "what pathway" and a "where pathway"; the "what pathway" model adopts the DPM algorithm, its input is the environmental RGB image information, and it is used to obtain the number and attribute information of objects in the environment;
the "where pathway" model is used to obtain the orientation angle and distance information of the object relative to the robot, including the direction relative to the robot and the distance from the robot;
the working process of the "where pathway" is: when the robot is exploring the environment, the PID algorithm is adopted for closed-loop control of the robot's rotation speed, so that the object to be detected is placed in the center of the field of vision; the robot will face a new scene every time it moves, and i is defined as the scene sequence number; firstly, the number of objects in the i-th scene is identified by the DPM algorithm and set as [symbol image: media_image43.png];
the current head-direction angle is [symbol image: media_image44.png], and the sequence number of the object currently detected in the i-th scene is j;
then, the orientation angle information of each object is solved successively; the mathematical expression of the current pixel deviation [symbol image: media_image45.png] is:
[equation image: media_image46.png]
[symbol image: media_image47.png] represents a pixel value in the center of the field of view, and [symbol image: media_image48.png] represents an average position of the left and right boundaries of the object to be detected in the image; the mathematical expression of the given value of the current rotation speed ω obtained by the PID algorithm is:
[equation image: media_image49.png]
when the object to be detected is placed in the center of the field of view, record the orientation angle @P of the robot head at this time; then the direction angle of the j-th object in the i-th scene relative to the robot before rotation is [symbol image: media_image50.png]; at the same time, the distance dij between the robot and the object to be measured is obtained by the depth camera; through the above operations, the orientation angle and distance information of the j-th object relative to the robot at the current moment can be obtained;
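The closed-loop centering step can be illustrated with a generic PID controller acting on the pixel deviation; the gains, class name, and time step below are illustrative stand-ins, not the claimed formula (18):

```python
class PixelCenteringPID:
    # drives the pixel deviation between the image center and the
    # detected object's bounding-box midpoint toward zero
    def __init__(self, kp=0.004, ki=0.0, kd=0.001):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def rotation_speed(self, center_px, object_mid_px, dt=0.1):
        err = center_px - object_mid_px   # current pixel deviation
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        # given value of the rotation speed fed to the chassis
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

Called once per camera frame, the returned value is the rotation speed setpoint; when the deviation reaches zero, the object sits in the image center and the robot's heading can be recorded.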
after the information of all objects in the current scene is obtained, the head-direction angle of the robot is rotated to [symbol image: media_image51.png] again to continue the exploration and cognition in the environment;
step 5 further comprises the following steps:
S5.1 through a similar scene measurement algorithm, establish a topological connection relationship between cognitive nodes with similar scenario information, so as to expand the topological connection relationship between adjacent cognitive nodes;
S5.2 use the topological relationship among all cognitive nodes to correct the cumulative error of the head-direction angle and position of the mobile robot during the exploration process, and construct a topological cognitive map;
S5.3 calculate the position of environmental objects in the physical coordinate system and calibrate them in the topological map to realize the construction of the environmental episodic cognitive map;
a specific algorithm for measuring similar scenes is as follows:
set two cognitive nodes ea and eb; first judge whether the number of objects in the two scenarios is the same and whether the attributes of the corresponding objects are consistent; if either of the above conditions is not satisfied, it is judged that the two scenarios do not match; otherwise, judge by measuring whether the orientation angle information of each object in the scenario is consistent; the mathematical expression of the measurement function [symbol image: media_image52.png] is:
[equation image: media_image53.png]
in formula (19), [symbol images: media_image54.png] represent the weights of the direction information and the distance information respectively, and [symbol image: media_image55.png] = 1; set a matching threshold St, and select an appropriate value according to the actual situation; when the value of the metric function is less than the matching threshold, it is judged that the two scenes match, and at this time the topological relationship between cognitive nodes ea and eb is established;
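A minimal sketch of the similar-scene measurement: object counts and attributes must agree exactly, and a weighted angle/distance discrepancy is then compared against the threshold. The mean-absolute-difference metric below is a stand-in for formula (19), which is only available as an image:

```python
def scenes_match(scene_a, scene_b, w_angle, w_dist, threshold):
    # each scene: list of dicts with "attr", "angle", "dist" per object
    if len(scene_a) != len(scene_b):
        return False                      # object counts differ
    if not scene_a:
        return True                       # two empty scenes trivially match
    if any(a["attr"] != b["attr"] for a, b in zip(scene_a, scene_b)):
        return False                      # corresponding attributes differ
    score = sum(w_angle * abs(a["angle"] - b["angle"]) +
                w_dist * abs(a["dist"] - b["dist"])
                for a, b in zip(scene_a, scene_b)) / len(scene_a)
    return score < threshold              # below S_t: the scenes match

a = [{"attr": "chair", "angle": 10.0, "dist": 1.0}]
b = [{"attr": "chair", "angle": 12.0, "dist": 1.1}]
c = [{"attr": "table", "angle": 10.0, "dist": 1.0}]
```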
S5.2 specifically includes:
it is known that the current cognitive node is ei, and the cognitive node associated with it is ek; this represents that there is a topological relationship between node ei and node ek; then the mathematical expression of the pose correction of cognitive nodes ei and ek is as follows:
firstly, calculate the change amount [symbol image: media_image56.png] of the cognitive nodes, which is shown in formula (20);
[equation image: media_image57.png]
in formula (20), [symbol images: media_image58.png] represent the horizontal and vertical coordinates of the place field's center corresponding to the cognitive points ei and ek respectively, dik represents the distance between the place field centers corresponding to the cognitive points ei and ek, and [symbol images: media_image59.png] respectively represent the head-direction angles at cognitive points ei and ek; after the change amount is obtained, the corrected node parameters can be iteratively calculated step by step according to the change amount, and the relevant mathematical expressions are shown in formulas (21) and (22); in formulas (21) and (22), t and t + 1 represent the time before and after each iterative operation, respectively, and δ represents the correction rate of the cumulative error;
[equation image: media_image60.png]
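One plausible sketch of the iterative correction: each topologically connected pair of cognitive nodes pulls its position estimates together at a correction rate delta. The claimed update in formulas (21) and (22), available only as images, also involves the head-direction angles and the inter-node distances dik, so this is a simplification:

```python
def relax_positions(positions, edges, delta=0.5, iterations=50):
    # positions: list of (x, y) node coordinates; edges: (i, k) pairs of
    # topologically associated cognitive nodes
    pos = [list(p) for p in positions]
    for _ in range(iterations):
        for i, k in edges:
            dx = pos[k][0] - pos[i][0]
            dy = pos[k][1] - pos[i][1]
            # split the correction symmetrically between the two nodes
            pos[i][0] += delta * dx / 2.0
            pos[i][1] += delta * dy / 2.0
            pos[k][0] -= delta * dx / 2.0
            pos[k][1] -= delta * dy / 2.0
    return pos

# two associated nodes whose position estimates disagree meet halfway
corrected = relax_positions([(0.0, 0.0), (2.0, 0.0)], [(0, 1)],
                            delta=1.0, iterations=1)
```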
a map convergence criterion algorithm is added after S5.2 to improve the real-time performance of the map construction process; define the map convergence at time t as Δd(t), whose mathematical expression is as follows:
[equation image: media_image61.png]
in formula (23), nsum represents the total number of current cognitive nodes, and ni represents the number of nodes associated with cognitive node i; set the scale factor of the convergence criterion as σ; when Δd(t) - Δd(t + 1) < σΔd(t + 1), it is judged that there is no need to continue the map update iteration; otherwise, continue to perform the update iteration of cognitive map construction;
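The convergence test itself is directly implementable; the measure below (average distance from each node to its ni associated nodes, averaged over all nsum nodes) is a plausible reading of formula (23), which is only available as an image:

```python
import math

def map_convergence(positions, neighbors):
    # positions: {node: (x, y)}; neighbors: {node: [associated nodes]}
    total = 0.0
    for i, nbrs in neighbors.items():
        if nbrs:
            total += sum(math.dist(positions[i], positions[k])
                         for k in nbrs) / len(nbrs)
    return total / len(neighbors)

def should_stop(d_t, d_t1, sigma):
    # stop iterating when delta_d(t) - delta_d(t+1) < sigma * delta_d(t+1)
    return d_t - d_t1 < sigma * d_t1
```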
the specific steps in step S5.3 are as follows:
after obtaining the topological cognitive map and the scenario information of the environment, the two can be integrated to obtain the episodic cognitive map of the environment; the specific method is as follows: according to the position of the robot in the physical coordinate system and the orientation angle and distance information of the object relative to the robot obtained above, the positions of all objects in the physical coordinate system can be calculated; insert each object into the physical coordinate system containing the topological map according to its attributes and position information to obtain the episodic cognitive map of the environment representation.
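The final placement of objects into the physical coordinate system reduces to a polar-to-Cartesian conversion from the robot's pose plus the object's relative orientation angle and depth-camera distance; the angle conventions here (degrees, world heading plus relative bearing measured from the X-axis) are assumptions:

```python
import math

def object_position(robot_xy, head_dir_deg, obj_angle_deg, obj_dist):
    # world-frame bearing of the object = robot heading + relative angle
    theta = math.radians(head_dir_deg + obj_angle_deg)
    return (robot_xy[0] + obj_dist * math.cos(theta),
            robot_xy[1] + obj_dist * math.sin(theta))

# robot at the origin facing along +X sees an object 2 m straight ahead
x, y = object_position((0.0, 0.0), 0.0, 0.0, 2.0)
```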
For the following reasons, the examiner submits that the above-identified additional limitations do not integrate the above-noted abstract idea into a practical application.
Regarding the additional limitations of “collecting …” and “inputting …,” the examiner submits that these limitations are insignificant extra-solution activity that merely uses a computer to perform the collection of data. In particular, the “collecting …” and “inputting …” steps are recited at a high level of generality (i.e., as a general means of gathering robot data for use in the calculating step) and amount to mere data gathering, which is a form of insignificant extra-solution activity.
Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitation(s) as an ordered combination or as a whole, the limitation(s) add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement/use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitation does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
101 Analysis – Step 2B
Regarding Step 2B of the 2019 PEG, representative independent claim 1 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for reasons similar to those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of “collecting …” and “inputting …” amount to nothing more than applying the exception using a generic computer component. Generally applying an exception using a generic computer component cannot provide an inventive concept. And, as discussed above, the examiner submits that the additional limitations of “collecting …” and “inputting …” are insignificant extra-solution activities.
Furthermore, the hardware, computer, wireless keyboard, memory, camera, gyroscope, etc. are all generic computer parts widely available in the market. Hence, the additional limitations utilize a generic computer and electronics for performing the functions, which does not add significantly more to the abstract idea.
Further, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B to determine whether it is more than well-understood, routine, conventional activity in the field. The additional limitations of “collecting …” and “inputting …” are well-understood, routine, and conventional activities because the background recites that the sensors are all conventional sensors mounted on the robot, and the specification does not provide any indication that the processor of the robot is anything other than a conventional computer within a robot. MPEP 2106.05(d)(II), and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016), TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610 (Fed. Cir. 2016), and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015), indicate that mere communication of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner.
Therefore, claim 1 is ineligible under 35 U.S.C. §101.
Allowable Subject Matter
Claim 1 is allowable over the prior art of record.
The following is an examiner’s statement of reasons for allowance:
The most pertinent prior art references are Yu et al. (CN 110210462 A), Yu et al. (CN 106949896 B), Chelian et al. (US 8762305 B1), and Edelman et al. (WO 2006019791 A2).
The prior art teaches an episodic memory model, a hippocampal cognitive mechanism based on the rat brain, a robot navigating an environment, visual pathways, the use of different sensors such as cameras, and entorhinal cortex-hippocampal neural networks. However, none of the prior art references, individually or in combination, teaches the steps involving the specific equations of claim 1.
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.”
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IG T AN whose telephone number is (571)270-5110. The examiner can normally be reached M - F: 10:00AM- 4:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aniss Chad can be reached at (571) 270-3832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/IG T AN/Primary Examiner, Art Unit 3662