Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder coupled with functional language, without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are all limitations recited within claims 1-10 in the form “X unit configured to Y” (for example, “meeting point information acquisition unit configured to acquire meeting point information”), wherein each limitation amounts to the pairing of functional language with a generic placeholder. The corresponding structure is described at, for example, paragraphs [0045]-[0052].
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2 and 5-8 are rejected under 35 U.S.C. 103 as being unpatentable over Khadloya et al. (US PG Pub 20190259284, hereafter referred to as Khadloya) in view of Oko et al. (Japanese PG Pub 2020126304, hereafter referred to as Oko).
Regarding the interpretation of claims 1-10 under 35 U.S.C. 112(f), the Examiner interprets the recited “units” as distinct segments/modules of code within an overall algorithm, each serving its own specific purpose, all stored in memory and executed by a processor, wherein execution by the processor causes the computer (processor plus non-transitory computer-readable medium) to carry out the respective portions of code.
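For illustration only, the Examiner's construction above can be pictured as follows. This is a minimal, hypothetical sketch (all function names and placeholder values are the Examiner's illustration, not drawn from either reference): each claimed “unit” is a separate segment of code with its own purpose, and the processor simply executes each segment in turn.

```python
# Hypothetical illustration of "units" as distinct code segments executed by
# a processor. Names and placeholder computations are illustrative only.

def region_of_interest_detection_unit(image):
    """One 'unit': detect an initial region of interest (placeholder central box)."""
    h, w = image["height"], image["width"]
    return (w // 4, h // 4, 3 * w // 4, 3 * h // 4)

def meeting_point_information_acquisition_unit(image):
    """Another 'unit': acquire meeting point information (placeholder coordinate)."""
    return {"meeting_point": (image["width"] // 2, image["height"] // 2)}

def processor(image):
    """The processor executes each unit's code segment in turn."""
    roi = region_of_interest_detection_unit(image)
    info = meeting_point_information_acquisition_unit(image)
    return roi, info
```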
Regarding claim 1, Khadloya discloses a region-of-interest detection apparatus comprising:
a region-of-interest detection unit configured to detect a region of interest from a first image obtained by a camera, mounted on a first movable body, photographing an area in a traveling direction of the first movable body (para. 0046, wherein the region of interest is determined by the processor circuit and corresponds to a direction of travel observed by an image sensor and fed in from an input unit; additionally, the FOV and description of the image sensor in an embodiment are described in para. 0054 and element 107 of Fig. 1);
and a meeting point information acquisition unit configured to acquire meeting point information indicating a meeting point where a second movable body different from the first movable body is capable of meeting the first movable body (paras. 0047-0050, wherein the meeting point information acquisition unit comprises the processor circuit and two CNNs, the first of which is utilized to generate enclosures/bounding polygons around objects of interest in a captured scene, and the second of which is utilized to act as a “sanity check” in the case that the second moving body is a pedestrian whose movement needs to be tracked, and wherein the processor circuit is further configured to detect coordinate changes of the vehicle and the pedestrian/moving object to determine whether a collision is imminent depending on the potential intersection of paths),
the second movable body traveling from a direction intersecting with a traveling route of the first movable body toward the traveling route (para. 0050, “Further, the processor circuit can determine whether the object is moving toward the vehicle, or moving toward a path that will intersect with a direction of travel of the vehicle”);
Specifically, Khadloya discloses a method of driver and pedestrian safety comprising utilizing an image sensor, a processor circuit, and a pair of CNNs for object detection, bounding, and classification to detect pedestrians, road hazards, and potentially hazardous road conditions.
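The path-intersection determination quoted above (Khadloya, para. 0050) can be illustrated with a minimal sketch. This is not Khadloya's actual algorithm; the constant-velocity model, function name, and parameters are hypothetical, offered only to show how a processor circuit could determine whether a moving object's path will intersect the vehicle's direction of travel.

```python
# Hypothetical sketch: will a moving object's straight-line path cross the
# vehicle's travel line within a time horizon? (Constant-velocity assumption.)

def paths_intersect(obj_pos, obj_vel, veh_pos, veh_dir, horizon=10.0):
    """Solve obj_pos + t*obj_vel = veh_pos + s*veh_dir as a 2x2 linear system;
    report an intersection if the object reaches the crossing within `horizon`
    seconds (t >= 0) at a point ahead of the vehicle (s >= 0)."""
    ax, ay = obj_vel
    bx, by = veh_dir
    det = ax * (-by) - ay * (-bx)  # determinant of [[ax, -bx], [ay, -by]]
    if abs(det) < 1e-9:
        return False  # parallel paths never cross
    dx = veh_pos[0] - obj_pos[0]
    dy = veh_pos[1] - obj_pos[1]
    t = (dx * (-by) - dy * (-bx)) / det  # time for object to reach crossing
    s = (ax * dy - ay * dx) / det        # distance parameter along vehicle path
    return 0.0 <= t <= horizon and s >= 0.0
```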
Khadloya does not disclose a region-of-interest addition unit configured to perform, based on the meeting point information, addition processing of adding a region including the meeting point in the first image to the region of interest.
However, Oko discloses a region-of-interest addition unit configured to perform, based on the meeting point information (para. 0016 for an embodiment of meeting point information, wherein the information is a location and distance to a target; in combination with the disclosure of Khadloya, an ordinarily skilled artisan would know to integrate the estimated intersection point of Khadloya as the origin point of the addition processing), addition processing of adding a region including the meeting point in the first image to the region of interest (paras. 0040-0054 and Figs. 5 and 13-15, wherein the detection unit performing exterior object detection first calculates end coordinate positions of the original bounding area of the attention region before relying on a decision process to decide whether or not to expand the attention regions’ boundaries based on the presence of an object of interest in both the x and y directions, and wherein the shift and addition of this region can be seen in Figs. 13-15).
Specifically, Oko discloses a method and system of object detection in the exterior surroundings of a car, including expanding the area of surveillance upon detection of an object for efficient detection over time.
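The boundary-expansion decision attributed to Oko above can be illustrated with a minimal sketch: compute the end coordinates of the current attention region, then decide whether to expand it in the x and/or y direction based on the presence of a detected object at or beyond a boundary. The function name, margin, and image dimensions are hypothetical; this is not Oko's actual algorithm.

```python
# Hypothetical sketch of boundary-based attention-region expansion.
# roi and obj_box are (x0, y0, x1, y1) corner coordinates in pixels.

def expand_roi(roi, obj_box, margin=20, img_w=1280, img_h=720):
    """Grow the attention region toward any boundary the detected object
    touches or crosses, clamped to the image extents."""
    x0, y0, x1, y1 = roi
    ox0, oy0, ox1, oy1 = obj_box
    # Expand horizontally if the object reaches an x boundary of the region.
    if ox0 <= x0:
        x0 = max(0, ox0 - margin)
    if ox1 >= x1:
        x1 = min(img_w, ox1 + margin)
    # Expand vertically if the object reaches a y boundary of the region.
    if oy0 <= y0:
        y0 = max(0, oy0 - margin)
    if oy1 >= y1:
        y1 = min(img_h, oy1 + margin)
    return (x0, y0, x1, y1)
```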
Therefore, both Khadloya and Oko disclose methods and systems of detection of identifying regions of interest utilizing a mounted camera in the exterior of a vehicle, wherein the detection of objects and object movement and the calculation of potential intersection points is a key feature.
Thus, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have combined the ROI addition processing of Oko within the method and system of Khadloya as the application of a known method to a known device ready for improvement to yield a predictable result; specifically, the ROI addition processing of Oko would have led to more robust object tracking and classification within the method and system of Khadloya, as well as a more accurate calculation of potential future intersection points as a consequence of a larger area being observed between the intersection point and the second object in motion en route to the intersection point.
Claim 6 is rejected, mutatis mutandis, for reasons similar to claim 1.
Claim 7 is rejected, mutatis mutandis, for reasons similar to claim 1. Khadloya further discloses a non-transitory computer-readable medium storing a computer program (paras. 0069 and 0071).
Regarding claim 2, Khadloya and Oko disclose all limitations of claim 1. Khadloya further discloses wherein the meeting point information acquisition unit is configured to generate the meeting point information by detecting the meeting point from the first image (paras. 0011-0015 and paras. 0047-0050, wherein the meeting point information acquisition is performed by analyzing each image in a stream of images, and wherein multiple images are not required to detect meeting points from the images).
Regarding claims 5 and 8, Khadloya and Oko disclose all limitations of claims 1 and 2, respectively. Khadloya further discloses a meeting point information update unit configured to update, based on the first image, the meeting point information (paras. 0047-0050, wherein the meeting point information acquisition is performed by analyzing each image in a stream of images, and wherein multiple images are not required to detect meeting points from the images).
Khadloya does not disclose wherein the region-of-interest addition unit is configured to perform, in response to update of the meeting point information, the addition processing.
However, Oko discloses wherein the region-of-interest addition unit is configured to perform, in response to update of the meeting point information, the addition processing (paras. 0040-0054 and Figs. 5 and 13-15, wherein the detection unit performing exterior object detection first calculates end coordinate positions of the original bounding area of the attention region before relying on a decision process to decide whether or not to expand the attention regions’ boundaries based on the presence of an object of interest in both the x and y directions, and wherein the shift and addition of this region can be seen in Figs. 13-15).
Thus, it would have been obvious to one having ordinary skill in the art prior to the effective filing date of the claimed invention to have combined the disclosures of Khadloya and Oko according to the rationale of claim 1.
Claims 3-4 and 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Khadloya in view of Oko and in further view of Shimizu (Japanese PG Pub 2019053436).
Regarding claim 3, Khadloya and Oko disclose all limitations of claim 1. Khadloya and Oko do not disclose wherein the meeting point information acquisition unit is configured to acquire, based on a position of the first movable body, the meeting point information from an external apparatus outside the first movable body.
However, Shimizu discloses wherein the meeting point information acquisition unit is configured to acquire, based on a position of the first movable body, the meeting point information from an external apparatus outside the first movable body (para. 0102, “the control part 411 specifies the condition of the point located ahead of the advancing direction of the vehicle 50 based on the vehicle information and external information acquired at S20 (S21)”).
Specifically, Shimizu discloses a driver load calculation method and system to detect and report the physical and psychological load on a driver based on information gathered from the surroundings.
Therefore, Shimizu, like Khadloya and Oko, discloses an apparatus enabling the collection of information, including meeting point information (in the case of Shimizu, from an external apparatus outside the first movable body). Thus, it would have been obvious to one having ordinary skill in the art prior to the effective filing date of the claimed invention to have applied Shimizu's collection and utilization of information from an external apparatus within the method of Khadloya as modified by Oko, as the use of a known technique with a known method to yield the predictable improvement of additional information for a more accurate calculation of intersection points for ROI calculation and expansion.
Regarding claim 4, Khadloya and Oko disclose all limitations of claim 1. Khadloya further discloses wherein the meeting point information acquisition unit is configured to generate the meeting point information by detecting the meeting point from a second image obtained by photographing the area in the traveling direction of the first movable body (paras. 0011-0015 and paras. 0047-0050, wherein the meeting point information acquisition is performed by analyzing each image in a stream of images, and wherein the second image may be the second image of any two adjacent images, as multiple images are not required to detect meeting points from the images).
Khadloya and Oko do not disclose where the image is acquired based on a position of the first movable body from an external apparatus outside the first movable body.
However, Shimizu discloses wherein the image is acquired based on a position of the first movable body from an external apparatus outside the first movable body (paras. 0101-0104, detailing the process of capturing information from images and vehicle information, including position, speed, and direction of travel, gathered from an external apparatus outside of the vehicle in question). Thus, it would have been obvious to one having ordinary skill in the art prior to the effective filing date of the claimed invention to have combined the disclosures of Khadloya, Oko, and Shimizu according to the rationale of claim 3.
Regarding claim 9, Khadloya, Oko, and Shimizu disclose all limitations of claim 3. Khadloya further discloses a meeting point information update unit configured to update, based on the first image, the meeting point information (paras. 0011-0015 and 0047-0050, wherein the meeting point information acquisition is performed by analyzing each image in a stream of images, and wherein multiple images are not required to detect meeting points from the images).
Khadloya does not disclose wherein the region-of-interest addition unit is configured to perform, in response to update of the meeting point information, the addition processing.
However, Oko discloses wherein the region-of-interest addition unit is configured to perform, in response to update of the meeting point information, the addition processing (paras. 0040-0054 and Figs. 5 and 13-15, wherein the detection unit performing exterior object detection first calculates end coordinate positions of the original bounding area of the attention region before relying on a decision process to decide whether or not to expand the attention regions’ boundaries based on the presence of an object of interest in both the x and y directions, and wherein the shift and addition of this region can be seen in Figs. 13-15).
Thus, it would have been obvious to one having ordinary skill in the art prior to the effective filing date of the claimed invention to have combined the disclosure of Oko with the method and system of Khadloya as modified by Shimizu, according to the rationale set forth for claim 3.
Regarding claim 10, Khadloya, Oko, and Shimizu disclose all limitations of claim 3. Khadloya further discloses a meeting point information update unit configured to update, based on the second image, the meeting point information (paras. 0011-0015 and 0047-0050, as discussed with respect to claim 4 above).
Khadloya does not disclose wherein the region-of-interest addition unit is configured to perform, in response to update of the meeting point information, the addition processing.
However, Oko discloses wherein the region-of-interest addition unit is configured to perform, in response to update of the meeting point information, the addition processing (paras. 0040-0054 and Figs. 5 and 13-15, wherein the detection unit performing exterior object detection first calculates end coordinate positions of the original bounding area of the attention region before relying on a decision process to decide whether or not to expand the attention regions’ boundaries based on the presence of an object of interest in both the x and y directions, and wherein the shift and addition of this region can be seen in Figs. 13-15).
Thus, it would have been obvious to one having ordinary skill in the art prior to the effective filing date of the claimed invention to have combined the disclosure of Oko with the method and system of Khadloya as modified by Shimizu, according to the rationale set forth for claim 3.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROHAN TEJAS MUKUNDHAN whose telephone number is (571)272-2368. The examiner can normally be reached Monday - Friday 9AM - 6PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse, can be reached at 571-272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ROHAN TEJAS MUKUNDHAN/Examiner, Art Unit 2663
/GREGORY A MORSE/Supervisory Patent Examiner, Art Unit 2698