DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-16 recite “a computer readable medium”.
The broadest reasonable interpretation of a claim drawn to a computer readable medium (also called machine readable medium and other such variations) typically covers forms of non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer readable media, particularly when the specification is silent. See MPEP 2111.01. When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. 101 as covering non-statutory subject matter.
A claim drawn to such a computer readable medium that covers both transitory and non-transitory embodiments may be amended to narrow the claim to cover only the statutory embodiments to avoid a rejection under 35 U.S.C. 101 by adding the limitation "non-transitory" to the claim.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
First, it is unclear what Applicant means by “data pertaining to a set of reflected signals”. It is important to note that the claims are directed to neural network training, and therefore the inputs to the neural network need to be defined. A neural network can be represented as an A->X->B transformation, where A and B are the input and output parameters and X is the neural network structure (transfer function), which includes multiple parameters that need to be learned in order to convert A into B. Applicant specifies as inputs only “antenna locations” and “data pertaining to a set of reflected signals”, but does not clarify what the “data pertaining to a set of reflected signals” are. For example, the “data pertaining to a set of reflected signals” could be the signals received by the antennas, but that information alone is not enough to do anything; or they could be the “time of arrival”, but without the time of transmission that will not accomplish anything at all.
Second, it is unclear what “a set of learned antenna locations” is and how it is obtained from the training procedure. Training does not learn “a set of learned antenna locations”, because the locations are inputs to the training procedure. Training, according to the claims, uses the locations and the data to obtain intermediate neural network parameters.
Due to the first and second issues, it is unclear what the inputs and outputs of the training procedure are, what the structure of the neural network is, and how “a set of learned antenna locations” can be a learned parameter of the training procedure.
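To illustrate the point, consider a minimal training sketch (all names, values, and the one-parameter model are hypothetical, chosen only for illustration and not drawn from Applicant's disclosure). Quantities supplied as inputs to training are read but never written, so they cannot emerge from training as "learned" outputs:

```python
# Hypothetical one-parameter "network": prediction = w * (location + signal).
# Antenna locations and signal data are the inputs A; only w (the X role) is learned.
locations = [1.0, 2.0, 0.5]     # inputs: antenna positions (illustrative values)
signals = [0.5, 1.0, 0.25]      # inputs: reflected-signal data
targets = [3.0, 6.0, 1.5]       # outputs B that training fits

locations_before = list(locations)
w = 0.0                          # the only learnable parameter
for _ in range(100):             # plain stochastic gradient descent
    for loc, sig, target in zip(locations, signals, targets):
        pred = w * (loc + sig)
        w -= 0.05 * (pred - target) * (loc + sig)   # update w, nothing else

# Training wrote only to w; the locations were read, never written, so a
# "set of learned antenna locations" cannot be an output of this procedure.
print(locations == locations_before, round(w, 3))  # True 2.0
```

The sketch converges to w = 2 because each target is exactly twice the corresponding input sum; the locations list is byte-for-byte unchanged after training.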
Third, it is unclear what “a set of learned antenna locations” represents: is it the location of an antenna predicted from the motion of the antenna, the position of an antenna predicted by the neural network, antennas that are not in the set of antennas, or simply a reflector in the FOV? In the claims, the learned antennas are being interpreted as a reflector in the FOV.
Claims 17 and 20 include the limitation “a data processor configured to receive data pertaining to said at least one reflected signal, to feed said data to a trained machine learning procedure which is specific to said non-uniform distribution, and to receive from an output layer of said trained machine learning procedure a reconstruction of said scene”, but an output layer does not reconstruct any scene (in the A->X->B transformation, X here is the output layer). In order to reconstruct the scene, one needs to apply the input A to the layer and compute the reconstruction, but this is not claimed; from the claims it appears that the scene is reconstructed from the layer itself. For purposes of examination, the claims are interpreted such that the scene is reconstructed from the layer.
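The distinction can be sketched as follows (a hypothetical minimal example; the weight and input values are arbitrary): a trained output layer X is only a set of stored parameters, and a scene value B appears only when input data A is applied through it:

```python
# Hypothetical trained output layer: X is just stored parameters (weights).
output_layer = [0.5, -1.0, 2.0]

def reconstruct(data):
    # B is produced only by applying the input A through the layer X;
    # the layer by itself contains no scene.
    return sum(w * x for w, x in zip(output_layer, data))

print(reconstruct([2.0, 1.0, 1.0]))  # 2.0
```

Without the `reconstruct(data)` step, `output_layer` is merely a parameter list; "receiving a reconstruction from an output layer" presupposes this unclaimed application of the data.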
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-10, 12, 14-16, 17, 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by D1 US 20180293453 A1.
Regarding the claims below, D1 teaches:
1, 16. A method of designing a radar, comprising:
receiving data pertaining to a set of reflected signals received from a distribution of objects by a respective set of receiving antennas at a respective set of locations; [0033] with [0030, 0024] (at each instant tn, the antenna can be considered a different antenna at a different position, especially when the positions, which depend on the speed of the vehicle, can be non-uniform)
feeding said data and said locations as training data to a machine learning procedure [0033] simultaneously calculating a set of learned antenna locations (any reflective object can be considered a learned antenna; alternatively, obtaining the image in front of the airplane can include the location where the airplane will be soon; due to the indefiniteness issues, many interpretations are possible) (another possibility is that the positions are calculated using GPS input to the neural network, which trains and outputs parameters) and a set of learned parameters associating said signals with said objects [0033] (reconstruction of at least a portion of the image), to provide a trained machine learning procedure parametrized by said set of learned parameters; [0033] and
storing in a computer readable medium, said set of learned antenna locations separately (inherent, as the same memory bit cannot store both parameters) from said trained machine learning procedure. [0030-0033]
2. The method according to claim 1, wherein a number of learned antenna locations is less than a number of said receiving antennas. (Under the Examiner's interpretation, the learned antenna is an object which reflects back the signal, and its number depends solely on the environment; for example, a moving airplane scanning ahead may have only one target, while the number of detection points that triggered the detection can be in the hundreds.)
3. The method according to claim 2, wherein said set of learned antenna locations is a subset of said respective set of locations. [0030-0033] (“reconstruct at least a portion of one of the one or more images”, which means reconstructing the portion of the image where the vehicle carrying the radar will appear the next moment or soon; for example, a parking garage toward which the vehicle is heading)
4. The method according to claim 2, wherein said set of learned antenna locations comprises at least one learned antenna location that is not a member of said respective set of locations. ([0033] the vehicle is moving and scanning its surroundings, and the detected object is off the road on which the vehicle is moving)
5. The method according to claim 1, wherein said set of learned parameters comprises parameters employed by said trained machine learning procedure to reconstruct a scene containing said objects. [0033]
6. The method according to claim 1, wherein said set of learned parameters comprises parameters employed by said trained machine learning procedure to reconstruct an image of a scene containing said objects. [0033]
7. The method according to claim 1, wherein said set of learned parameters comprises parameters employed by said trained machine learning procedure to detect presence of said objects. [0033]
8. The method according to claim 1, wherein said set of learned parameters comprises parameters employed by said trained machine learning procedure to determine locations of said objects. [0033][0044]
9. The method according to claim 1, wherein said set of learned parameters comprises parameters employed by said trained machine learning procedure to segment of a scene containing said objects. [0033,0044]
10. The method according to claim 1, wherein said machine learning procedure comprises a sub-sampling layer [0045] (one or more layers), wherein said learned antenna locations (layers thereof may extract nighttime feature information/data corresponding to the identified nighttime features; in an example embodiment, nighttime feature information/data may comprise an array describing the size, three-dimensional shape, location, orientation, and/or the like of the nighttime feature) are parameters of said sub-sampling layer, and wherein said trained machine learning procedure is devoid of said sub-sampling layer. [0044-0045]
12. The method according to claim 1, comprising training said machine learning procedure to learn at least one acquisition parameter. [0030-0033][0045]
14. A method of constructing a radar, the method comprising: executing the method according to claim 1; and constructing an array of receiving antennas at said set of learned antenna locations, and an array of transmitting antennas at predetermined locations; thereby constructing the radar. (By adding antennas along the direction of movement, a synthetic antenna array is created; detecting a reflector object and identifying its location constitutes constructing an array of receiving antennas at said set of learned antenna locations [0045].)
15. A method of analyzing a scene, the method comprising: receiving signals from the scene using a radar designed according to claim 1; feeding said signals to said trained machine learning procedure; and receiving from said trained machine learning procedure output pertaining to an association of said signals with objects in the scene.[0033]
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over D1.
Regarding the claims below, D1 teaches:
17, 20 A radar system, comprising:
at least one transmitting antenna for transmitting a signal to a distribution of objects in a scene; [0030-0033]
(claim 20) a set of transmitting antennas distributed non-uniformly over a surface for transmitting a respective set of signals to a distribution of objects in a scene; ([0030-0033]; obvious, as the car is moving)
a set of receiving antennas distributed non-uniformly over a surface for receiving a respective set of reflected signals from said objects; and ([0030-0033]; obvious, as the car is moving)
a data processor configured to receive data pertaining to said reflected signals, to feed said data to a trained machine learning procedure which is specific to said non-uniform distribution, and to receive from an output layer of said trained machine learning procedure a reconstruction of said scene. [0033]
18. The system according to claim 17, comprising a plurality of transmitting antennas for transmitting a respective plurality of signals to said distribution of objects. [0030-0033]
19. The system according to claim 18, wherein said plurality of transmitting antennas are also distributed non-uniformly. [0030-0033]
It would have been obvious to one of ordinary skill in the art at the time of filing to modify the invention of D1 to distribute the antennas non-uniformly in order to more accurately detect the objects.
Claim(s) 11 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over D1 in view of D2 US 20200355817 A1.
Regarding claims 11 and 13, D1 does not teach, but D2 teaches:
11. The method according to claim 1, wherein said machine learning procedure comprises a beamforming layer having fixed parameters. (Fig. 7; beamforming can be regarded as part of the neural network) [0067, 0082]
13. The method according to claim 12, wherein said at least one acquisition parameter is selected from the group consisting of transmitted waveform modulation, and Doppler shift acquisition.[0074]
It would have been obvious to one of ordinary skill in the art at the time of filing to modify the invention of D1 with the invention of D2 in order to account for performance discrepancies caused by manufacturing variances, hardware performance variances over time or temperature, or the current position or orientation of the device. [0067]
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HELENA SERAYDARYAN whose telephone number is (571)270-0706. The examiner can normally be reached on M-T, 7:30-5pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Robert Hodge can be reached on (571)272-2097. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HELENA H SERAYDARYAN/ Examiner, Art Unit 3645
/ROBERT W HODGE/Supervisory Patent Examiner, Art Unit 3645