DETAILED ACTION
This action is in response to the application filed on November 22, 2023. Claims 1-16 are pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-8 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement.
Independent claim 1 contains the limitation “a second device.” This limitation constitutes subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The examiner has reviewed Paragraph [0008], and there is no support for the limitation “a second device.” The dependent claims do not alleviate the issues of the independent claim and are also rejected under 35 U.S.C. 112(a).
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
The limitations of independent claim 1 include “a second time that is before the first time.” The limitation is interpreted as follows: there is a first resolution image associated with a first time, and a second resolution image generated from the first resolution image at the first time. The time appearance information is determined based on the first resolution feature and the second resolution feature at the first time. It is unclear, given the current limitations, how time appearance information of “a second time that is before the first time” is determined from the first resolution feature and the second resolution feature. The dependent claims do not alleviate the issues of the independent claim and are also rejected under 35 U.S.C. 112(b).
The limitations of independent claim 1 include “a second device.” The limitation is interpreted as meaning that there is a device that comprises the limitations of claim 1. Claim 1 does not recite “a first device”; therefore, it is unclear, given the current limitations, how “a second device” is used in the claim limitations. The dependent claims do not alleviate the issues of the independent claim and are also rejected under 35 U.S.C. 112(b).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 7-9, and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Levinson et al., US 11157768, in view of Yasui et al., US 20220319147.
Regarding claim 1, Levinson teaches a device comprising:
a processor; and memory storing instructions that, when executed by the processor, cause the device to (see Levinson, Col 11, Lines 62-66, “one or more processors 716 and memory 718 communicatively coupled with the one or more processors 716”):
receive a first resolution image associated with a first time and comprising an interest object (see Levinson, Fig. 1 and Col 4, Lines 44-47, “sensor data include sensor data 102 (e.g., associated with a first resolution)” );
generate, based on the first resolution image, a second resolution image (see Levinson, Fig. 1, Col 4, Lines 45-48, “sensor data 104 (e.g., associated with a second resolution),” and Col 5, Lines 9-12, “the sensor data 104 can be generated based on the sensor data 102”);
determine, based on the second resolution image, a second resolution feature (see Levinson, Fig. 2, Col 10, Lines 28-33, “the output 206 can identify portion(s) or region(s) of the sensor data 204 and/or a data level associated with such portion(s) or region(s),” the data level associated with the identified region(s) is considered to be a second resolution feature of the object of interest);
output, based on the first resolution feature and the second resolution feature, time appearance information indicating an appearance of the interest object associated with the first time (see Levinson, Col 12, Lines 48-61, “the perception component 722 can provide processed sensor data that indicates one or more characteristics associated with a detected entity (e.g., a tracked object) and/or the environment in which the entity is positioned. In some examples, characteristics associated with an entity can include, but are not limited to, an x-position (global and/or local position), a y-position (global and/or local position), a z-position (global and/or local position), an orientation (e.g., a roll, pitch, yaw), an entity type (e.g., a classification), a velocity of the entity, an acceleration of the entity, an extent of the entity (size), etc. Characteristics associated with the environment can include, but are not limited to, a presence of another entity in the environment, a state of another entity in the environment, a time of day, a day of a week, a season, a weather condition, an indication of darkness/light, etc,” coordinates of the detected entity (tracked object), velocity of the entity, time of day are all considered to be time appearance information of the interest object );
track an operation state of the interest object based on the time appearance information and past time appearance information, wherein the past time appearance information indicates an appearance of the interest object associated with a second time that is before the first time (see Levinson, Col 12, Lines 59-65, “Characteristics associated with the environment can include, but are not limited to, a presence of another entity in the environment, a state of another entity in the environment, a time of day, a day of a week, a season, a weather condition, an indication of darkness/light, etc”);
and output, based on the tracked operation state of the interest object, a signal to control operation of a second device (see Levinson, Col 21, Lines 30-40, “At operation 816, the process can include controlling a vehicle based at least in part on the first information and the second information. In some instances, the operation 816 can include generating a trajectory to stop the vehicle or to otherwise control the vehicle to safely traverse the environment. In some examples, the operation 816 can include modifying a candidate trajectory based on detected objects, for example, to determine a modified trajectory for the vehicle to follow in the environment”).
Levinson does not expressly teach
a second resolution feature of the interest object;
determine, based on the second resolution feature, interest object information comprising location information of the interest object;
determine, based on the location information, a first resolution feature associated with a region of the first resolution image;
However, Yasui, in a similar invention in the same field of endeavor, teaches
a second resolution feature of the interest object (see Yasui, Fig. 2, and Paragraph [0028], “The extractor 150 includes a feature amount difference calculator 152,” the extractor extracts features from the low-resolution image, and Paragraph [0033], “to extract a point of interest (a point that is discontinuous with the surroundings in FIG. 2),” a point of interest is an interest object);
determine, based on the second resolution feature, interest object information comprising location information of the interest object (see Yasui, Fig. 3, and Paragraph [0035], “the mask area determiner 130 extracts edge points in a right and left direction in a low-resolution image, and detects a position in an image of a road lane marking, a road shoulder, or the like (white line, traveling road boundary) by connecting the edge points arranged in a straight line,” a position is considered to be location information);
determine, based on the location information, a first resolution feature associated with a region of the first resolution image (see Yasui, Paragraph [0034], “The high-resolution processor 170 cuts out a portion corresponding to the point of interest in the captured image (synchronous cutting in FIG. 2), performs high-resolution processing on it, and determines whether an object on a road is an object that a vehicle needs to avoid contact with,” to determine an object category, features have to be extracted);
Levinson and Yasui are analogous art because they are both in the same field of endeavor of object detection. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to extract features and calculate the difference, detect the position of an object in an image, and determine an object category, as taught in the object detection device of Yasui, in the process of Levinson, to improve the robustness of detection performance against a variation in size of an object reflected in the point of interest (see Yasui, Paragraph [0015]).
Regarding claim 7, Levinson in view of Yasui further teaches the device of claim 1,
wherein the instructions, when executed by the processor, cause the device to: acquire the second resolution image by adjusting a resolution of the first resolution image (see Levinson, Col 6, Lines 35-40, “The sensor data 202 can be down sampled, compressed, or otherwise manipulated to generate sensor data 204 (e.g., image data associated with a second resolution that is less than the first resolution)”).
The rationale of claim 1 has been applied herein.
Regarding claim 8, Levinson in view of Yasui further teaches the device of claim 1,
wherein the instructions, when executed by the processor, cause the device to: determine, based on the location information and based on a ratio of a size of the second resolution image to a size of the first resolution image, the region of the first resolution image (see Levinson, Col 2, Lines 37-43, “In some cases, image data can be captured at a “raw” or uncompressed level and can be compressed to a particular compression level (e.g., represented as a compression ratio between an uncompressed size (or first size) and a compressed size)”).
The rationale of claim 1 has been applied herein.
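As a purely illustrative aside (not part of the examiner's analysis or the prosecution record), the resolution adjustment at issue in claim 7 and the ratio-based region mapping at issue in claim 8 can be sketched as follows; the image sizes, the stride-based downsampling, and the helper names are hypothetical and are not drawn from Levinson or the claims.

```python
# Hypothetical sketch: down-sample a "first resolution" image by striding,
# then map a location found in the low-resolution image back to a region of
# the high-resolution image using the ratio of the two image sizes.

def downsample(image, factor):
    """Adjust resolution by keeping every `factor`-th row and column."""
    return [row[::factor] for row in image[::factor]]

def map_region(location, low_size, high_size):
    """Scale a (row, col) location in the low-resolution image to the
    corresponding (row, col) in the high-resolution image, using the
    ratio of the two image sizes."""
    ratio = high_size / low_size
    return (int(location[0] * ratio), int(location[1] * ratio))

# An 8x8 "first resolution" image with a bright pixel at (4, 6).
high = [[0] * 8 for _ in range(8)]
high[4][6] = 255

low = downsample(high, 2)                   # 4x4 "second resolution" image
assert low[2][3] == 255                     # the bright pixel survives at (2, 3)
assert map_region((2, 3), 4, 8) == (4, 6)   # mapped back via the size ratio
```

The sketch only shows the size-ratio relationship between the two images; an actual implementation would use filtered resampling rather than bare striding.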
As per claim 9, Claim 9 claims a method comprising the same limitations as Claim 1. Therefore, the rejection and rationale are analogous to that made in Claim 1.
As per claim 15, Claim 15 claims the same limitations as Claim 7 and is dependent on a similarly rejected dependent claim. Therefore, the rejection and rationale are analogous to those made in Claim 7.
As per claim 16, Claim 16 claims the same limitations as Claim 8 and is dependent on a similarly rejected dependent claim. Therefore, the rejection and rationale are analogous to those made in Claim 8.
Claims 2 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Levinson et al., US 11157768, in view of Yasui et al., US 20220319147, and further in view of Hashimoto et al., US 20210295058.
Regarding claim 2, Levinson in view of Yasui does not expressly teach
wherein the instructions, when executed by the processor, cause the device to output the time appearance information by applying a pooling operation to the first resolution feature and the second resolution feature.
However, Hashimoto, in a similar invention in the same field of endeavor, teaches
wherein the instructions, when executed by the processor, cause the device to output the time appearance information by applying a pooling operation to the first resolution feature and the second resolution feature (Hashimoto, Paragraph [0048], “for example, the pooling layer included in the main part of the first classifier calculates a feature map with a resolution lower than an inputted image, this low-resolution feature map may be outputted to the state identifying unit 34. Additionally, the multiple feature maps of different resolutions calculated by the main part of the first classifier may be outputted to the state identifying unit 34”).
Levinson, Yasui, and Hashimoto are analogous art because they are all in the same field of endeavor of object detection. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for a pooling layer to be included in the main part of a classifier, as taught in the apparatus for identifying the state of an object of Hashimoto, in the process of Levinson in view of Yasui, to identify the state of an object represented in an image (see Hashimoto, Paragraph [0006]).
As per claim 10, Claim 10 claims the same limitations as Claim 2 and is dependent on a similarly rejected dependent claim. Therefore, the rejection and rationale are analogous to those made in Claim 2.
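As a purely illustrative aside (not part of the examiner's analysis or the prosecution record), a pooling operation of the kind referenced in the cited Hashimoto passage, which yields a feature map at a lower resolution than its input, can be sketched as follows; the 4x4 input values and the function name are hypothetical.

```python
# Hypothetical sketch: 2x2 max pooling, which halves the resolution of a
# feature map in each dimension while keeping the strongest response in
# each 2x2 neighborhood.

def max_pool_2x2(feature_map):
    h, w = len(feature_map), len(feature_map[0])
    return [
        [max(feature_map[r][c], feature_map[r][c + 1],
             feature_map[r + 1][c], feature_map[r + 1][c + 1])
         for c in range(0, w, 2)]
        for r in range(0, h, 2)
    ]

fm = [
    [1, 3, 2, 0],
    [4, 2, 1, 1],
    [0, 5, 6, 2],
    [1, 1, 2, 8],
]
pooled = max_pool_2x2(fm)          # half the resolution in each dimension
assert pooled == [[4, 2], [5, 8]]
```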
Claims 3-6 and 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over Levinson et al., US 11157768, in view of Yasui et al., US 20220319147, and further in view of Yoshimura et al., US 20230342951.
Regarding claim 3, Levinson in view of Yasui does not expressly teach
wherein the instructions, when executed by the processor, cause the device to: determine, based on similarity between the time appearance information and the past time appearance information, whether the interest object is the same as a previously recognized object at the second time;
and track, based on the determination that the interest object is the same, the operation state of the interest object.
However, Yoshimura, in a similar invention in the same field of endeavor, teaches
wherein the instructions, when executed by the processor, cause the device to: determine, based on similarity between the time appearance information and the past time appearance information, whether the interest object is the same as a previously recognized object at the second time (Yoshimura, Fig. 7, and Paragraph [0078], “In step S27-2 of FIG. 7, the identification section 14A calculates appearance similarity between an object included in the corresponding high evaluation object region and the corresponding tracking target”);
and track, based on the determination that the interest object is the same, the operation state of the interest object (see Yoshimura, Paragraph [0074], “Step S27 of FIG. 4 is carried out in a case where it has been determined to be Yes in step S25. In step S27, the identification section 14A carries out a first correspondence identification process. The first correspondence identification process is a process of identifying a correspondence between each of tracking targets and a high evaluation object region,” and Paragraph [0151], “specific examples of such other types of similarity include similarity that is based on a moving speed, a feature point, a size, or a position in a three-dimensional space of each of an object region and a tracking target region, and the like,” a moving speed, a position, etc. are considered to be operation states of the interest object; the correspondence of the evaluation object and the tracking target is considered to be a determination of whether the interest object is the same as the previously recognized object).
Levinson, Yasui, and Yoshimura are analogous art because they are all in the same field of endeavor of object detection. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to calculate appearance similarity, identify correspondence, and calculate cosine similarity, as taught in the object tracking apparatus of Yoshimura, in the process of Levinson in view of Yasui, to further improve accuracy in tracking a tracking target in an image sequence (see Yoshimura, Paragraph [0010]).
Regarding claim 4, Levinson in view of Yasui in further view of Yoshimura further teaches the device of claim 3,
wherein the instructions, when executed by the processor, cause the device to: determine, based on cosine similarity between the time appearance information and the past time appearance information, whether the interest object is the same as the previously recognized object (see Yoshimura, Paragraph [0078], “the identification section 14A calculates, as the appearance similarity, cosine similarity between the appearance feature extracted in step S27-1 and the appearance feature of the corresponding tracking target stored in the tracking target information 21”).
The rationale of claim 3 has been applied herein.
Regarding claim 5, Levinson in view of Yasui in further view of Yoshimura further teaches the device of claim 3,
wherein the instructions, when executed by the processor, cause the device to: determine whether the interest object is the same as the previously recognized object, based on at least one of an intersection over union (IoU), a Mahalanobis distance, or cosine similarity (see Yoshimura, Paragraph [0078], “In step S27-2 of FIG. 7, the identification section 14A calculates appearance similarity between an object included in the corresponding high evaluation object region and the corresponding tracking target. For example, the identification section 14A calculates, as the appearance similarity, cosine similarity between the appearance feature extracted in step S27-1 and the appearance feature of the corresponding tracking target stored in the tracking target information 21,” and Paragraph [0080], “In step S27-3 of FIG. 7, the identification section 14A calculates IoU between the corresponding high evaluation object region and a tracking target region associated with the corresponding tracking target,” the correspondence of the evaluation object and the tracking target is considered to be a determination of whether the interest object is the same as the previously recognized object).
The rationale of claim 3 has been applied herein.
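As a purely illustrative aside (not part of the examiner's analysis or the prosecution record), the cosine similarity and intersection over union (IoU) measures recited in claims 4 and 5 can be sketched as follows; the vectors, box coordinates, and function names are hypothetical.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two appearance-feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Identical appearance features give similarity 1.0; overlapping 2x2 boxes
# shifted by one unit share 1 of their 7 combined units of area.
assert cosine_similarity([1.0, 0.0], [1.0, 0.0]) == 1.0
assert iou((0, 0, 2, 2), (1, 1, 3, 3)) == 1 / 7
```

In a tracking pipeline of the kind the cited passages describe, a detection would be matched to a stored tracking target when such a similarity exceeds a chosen threshold.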
Regarding claim 6, Levinson in view of Yasui in further view of Yoshimura further teaches the device of claim 3,
wherein the instructions, when executed by the processor, cause the device to: allocate a new identifier (ID) to the interest object based on the interest object being different from the previously recognized object or based on the interest object not being previously recognized (see Yoshimura, Fig. 5, Paragraph [0104], “For example, in the example of FIG. 5, a correspondence with any of the tracking targets ID1 and ID2 has not been identified for the high evaluation object region d3 in the frame f2. Then, the management section 15A gives a tracking ID3 to the object obj3 included in the high evaluation object region d3”);
and track, based on the new ID, the operation state of the interest object (see Yoshimura, Paragraph [0104], “As the appearance feature of “v3”, an appearance feature extracted from the object region d3 in step S27-1 is applicable”).
As per claim 11, Claim 11 claims the same limitations as Claim 3 and is dependent on a similarly rejected dependent claim. Therefore, the rejection and rationale are analogous to those made in Claim 3.
As per claim 12, Claim 12 claims the same limitations as Claim 4 and is dependent on a similarly rejected dependent claim. Therefore, the rejection and rationale are analogous to those made in Claim 4.
As per claim 13, Claim 13 claims the same limitations as Claim 5 and is dependent on a similarly rejected dependent claim. Therefore, the rejection and rationale are analogous to those made in Claim 5.
As per claim 14, Claim 14 claims the same limitations as Claim 6 and is dependent on a similarly rejected dependent claim. Therefore, the rejection and rationale are analogous to those made in Claim 6.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DOMINIQUE JAMES whose telephone number is (703)756-1655. The examiner can normally be reached 9:00 am - 6:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached at (571)270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DOMINIQUE JAMES/Examiner, Art Unit 2666
/MING Y HON/Primary Examiner, Art Unit 2666