DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 23 October 2025 have been fully considered but they are persuasive only in part.
First, the objections to the claims, being overcome by applicant’s amendments, are withdrawn.
Second, in view of the claim amendments, the previous rejections under 35 U.S.C. 112(b) are withdrawn, with new rejections under 35 U.S.C. 112(a), written description requirement, and 35 U.S.C. 112(b) being made herein based on the claim amendments.
In this respect, applicant asserts that, in changing the claim language to “minimizing/maximizing” the field of view (FOV):
It is apparent that the term "wide angle" is consistently used in the specification to denote the field of view (FOV) of a camera, and that "minimizing" and "maximizing" refer to control operations performed by the sensor unit for adjusting the effective FOV of respective cameras. Moreover, a person skilled in the art would readily understand that such adjustments are technically feasible even without mechanical zoom mechanisms. For example, calibration data pre-stored for each gear stage or driving mode can be used to compensate FOV variations, lane or parking line detection can be employed to estimate distortion and apply correction coefficients, and sensor fusion with IMU or steering angle sensors can further refine the effective FOV. Thus, the adjustment of the "wide angle" (now "FOV") is not limited to mechanical lens movements but may be realized by software-based control and image processing algorithms.
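For concreteness only, the following is a minimal, hypothetical sketch (in Python, with assumed names and coefficient values that are not drawn from applicant’s specification) of the kind of calibration-based, software-only compensation of an effective FOV that the remarks describe:

# Hypothetical illustration only: software-based compensation of an effective
# field of view using pre-stored, per-driving-mode calibration coefficients
# (assumed values); no mechanical zoom or lens movement is involved.
FOV_CORRECTION = {"drive": 1.00, "reverse": 1.05, "parking": 1.10}  # assumed calibration data

def effective_fov(nominal_fov_deg: float, driving_mode: str) -> float:
    """Return a software-compensated effective FOV for the given driving mode."""
    coefficient = FOV_CORRECTION.get(driving_mode, 1.0)
    # Clamp to an assumed physical limit of the lens (190 degrees here).
    return min(nominal_fov_deg * coefficient, 190.0)

print(effective_fov(180.0, "parking"))  # 190.0 after clamping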
First, applicant’s amendments violate 37 CFR 1.75(d)(1) which indicates:
(d)(1) The claim or claims must conform to the invention as set forth in the remainder of the specification and the terms and phrases used in the claims must find clear support or antecedent basis in the description so that the meaning of the terms in the claims may be ascertainable by reference to the description. (See § 1.58(a).)
Since the term “field of view (FOV)” does not appear in the specification, it is unclear (and not reasonably certain) what the metes and bounds of maximizing or minimizing the FOV of the camera would possibly cover, and the specification apparently does not describe by what algorithm(s) such maximizing or minimizing would be effected. Here, applicant has argued that the FOV “adjustment” (NB: no adjustment is presently claimed) is not limited to a mechanical zoom or mechanical lens movements but may be realized by “software-based control and image processing algorithms”. However, if such arguments are intended to set up a “special definition” of what the claim terms would now cover, applicant is reminded that (per MPEP 2173.01, I.), "An applicant may not add a special definition or disavowal after the filing date of the application. However, an applicant may point out or explain in remarks where the specification as filed contains a special definition or disavowal."
Third, because the claims are indefinite and the examiner cannot see in the claim language (e.g., maximizing or minimizing an FOV that might not even need to be adjustable, to read on the claim language) any clear limitation that might integrate the recited abstract idea into a practical application, the examiner repeats in modified form the rejection under 35 U.S.C. 101.
In this respect, applicant argues:
Because 'receiving a selection of one of the at least one detected parking space' cannot possibly be performed in the human mind or by a human using pen and paper, the subject matter of the independent claims are not an abstract idea and therefore do not recite a judicial exception.
The examiner disagrees. If a driver is driving a vehicle and says to a passenger, “I’m going to park over there in that handicapped space so we can get your wheelchair out of the back seat more easily”, the passenger can receive (via her ears and into her consciousness) from the driver the indication that i) the driver has detected the handicapped parking space and ii) selected that handicapped parking space to park his vehicle in.
Next, applicant argues:
Similar considerations apply to the feature "controlling the plurality of cameras by minimizing a field of view (FOV) of a first camera for which a received video is unavailable, and maximizing a FOV of at least one second camera adjacent to the first camera."
While the examiner has characterized the controlling of the cameras as insignificant extra-solution activity, “apply it” like limitations, and/or limitations recited at a high level of generality which are not indicative of integration into a practical application, and does not consider the controlling limitation to effect a transformation of a particular article to a different state or thing, the examiner agrees that the controlling limitation cannot possibly be performed in the human mind or by a human using pen and paper.
Next, applicant argues:
All independent claims recite cameras and ultrasonic sensors and the following (or similar) features: 1) adjusting the FOV of the cameras (under certain circumstances) and 2) "generating parking guide information comprising at least one of a path along which the vehicle is guidable to the selected parking space, a steering angle, or a speed of the vehicle."
Because the features 'adjusting the FOV' and 'generating specific parking guide information' are technical solutions to a technical problem and improve the autonomous parking assist technology because they perform coordinated control of multiple heterogeneous sensors (video cameras and ultrasonic sensors) based on availability determination of received videos. This control ensures system reliability and continuous functionality of the autonomous parking assist system. Moreover, they help to maintain accurate vehicle guidance and parking space detection under sensor failure or degraded visibility.
However, applicant apparently does not claim “adjusting the FOV” or adjusting anything else (e.g., because the claims apparently cover a camera with a non-adjustable FOV, the FOV would apparently be both minimized and maximized at the same time without requiring any transformation of a particular article to a different state) and because the parking guide information could be generated with a pen or pencil and paper as an aid to a mental process (MPEP 2106.04(a)(2), III. and III., B.), the examiner does not find these arguments persuasive.
Fourth, regarding the rejection under 35 U.S.C. 103, applicant’s numerous claim amendments appear to change the claim grammar but not the scope of the claims in any significant and/or definite sense. Accordingly, the rejection is maintained.
In this respect, applicant argues:
Joos does not teach:
1) dynamic FOV control of cameras. Even though Joos mentions cameras, it does not describe any mechanism for dynamically controlling or adjusting the field of view (FOV) of the individual cameras based on video availability.
2) Video availability determination algorithm. While Joos discusses processing sensor data and confidence levels, it doesn't specifically describe determining whether received videos are "available" using a video determination algorithm.
3) Coordinated camera control strategy. Joos does not show minimizing FOV of unavailable cameras while maximizing FOV of adjacent cameras.
4) Sectioned camera/sensor arrangement. Joos doesn't describe cameras and ultrasonic sensors being configured to capture/detect "different sections among sections preset by dividing a periphery of the vehicle."
Even assuming the Office Action is correct and Hayakawa shows 2) and 4), and it is not (in particular it does not show a video availability determination algorithm), Hayakawa still does not show or suggest 1), i.e., any mechanism for dynamically adjusting the field of view (FOV) of cameras and 3) a coordinated camera control strategy. In particular, Hayakawa does not show switching between different processing modes when cameras fail and it doesn't describe the specific strategy of "minimizing FOV of unavailable cameras while maximizing FOV of adjacent cameras."
In response, the examiner notes that applicant does not apparently disclose or clearly claim dynamic FOV control of cameras or a coordinated camera control strategy, so these arguments are not commensurate either with the disclosure or the scope of the claims. Moreover, Hayakawa et al. (‘140) apparently discloses the video availability determination algorithm. That is, Hayakawa et al. (‘140) determines whether a fault has occurred in the respective cameras, and detects a fault in a camera when no signal (e.g., obviously video) is transmitted from the camera, and the video is thus unavailable. Additionally, Hayakawa et al. (‘140) clearly discloses the sectioned camera/sensor arrangement in FIG. 1. As to the argument, “Hayakawa still does not show or suggest 1), i.e., any mechanism for dynamically adjusting the field of view (FOV) of cameras”, apparently neither does applicant. Accordingly, applicant’s arguments are not persuasive in this respect.
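For illustration only, a minimal sketch (hypothetical names; not asserted to be the reference’s actual implementation) of the kind of video availability determination attributed to Hayakawa et al. (‘140), in which a camera’s video is treated as unavailable when no signal/frame is received from that camera:

from typing import Dict, Optional

def video_available(latest_frame: Optional[bytes]) -> bool:
    """Treat a video as unavailable when no signal/frame was received from the camera."""
    return latest_frame is not None

def determine_availability(latest_frames: Dict[str, Optional[bytes]]) -> Dict[str, bool]:
    """Map each camera identifier to True (video available) or False (fault; video unavailable)."""
    return {camera_id: video_available(frame) for camera_id, frame in latest_frames.items()}

# Example: the left-side camera transmitted nothing, so its video is unavailable.
print(determine_availability({"front": b"frame-bytes", "left": None}))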
In an attempt to advance prosecution, the examiner believes that outstanding 112 issues related to the FOV(s) in the independent claims in this case, as well as the objection to the specification, could be resolved by changing/amending the following limitations in each of the independent claims:
minimizing a field of view (FOV) of a first camera for which a received video is unavailable, and
maximizing a FOV of at least one second camera adjacent to the first camera;
to read:
when a received video of a first camera is unavailable, adjusting a wide angle of the first camera to minimize the wide angle of the first camera, and
adjusting a wide angle of at least one second camera adjacent to the first camera to maximize the wide angle of the second camera in order to minimize a section in which the received video of the first camera is unavailable;
if such be applicant’s intent.1
Accordingly, applicant’s arguments are only persuasive in part.
Specification
The specification is objected to as failing to provide proper antecedent basis for the claimed subject matter. See 37 CFR 1.75(d)(1) and MPEP § 608.01(o). Correction of the following is required: should applicant choose not to adopt the examiner’s proposed claim language indicated above in the Response to Arguments section, antecedent basis for the following new (independent) claim terminology would be required in the specification: “minimizing a field of view (FOV) of a first camera for which a received video is unavailable, and maximizing a FOV of at least one second camera adjacent to the first camera;”
Claim Objections
Claims 22 and 26 are objected to because of the following informalities: i) in claim 22, line 3, “. . . the vehicle is based on . . .” is apparently grammatically incorrect, and should apparently read, “. . . the vehicle based on . . .”, without the “is”; and ii) in claim 26, line 3, “the the first camera” should read, “the first camera”. Appropriate correction (or reasoned traversal) is required.
While it is not completely clear from the specification what applicant intends by the claim recitation “FOV”, to the extent that applicant intends an acronym, then all recitations of “a FOV” in the claims are grammatically incorrect, and should apparently read, “an FOV”2. Full correction of the “FOV” issue (see also the claim rejections) is required.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 21 to 35 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA ), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Regarding claims 21, 28, and 35, applicant has apparently not previously described a “field of view” or “FOV” of a first or second camera, and has not apparently previously described, in sufficient detail, by what algorithm(s)3, or by what steps or procedure4, he minimized a field of view (FOV) of a first camera for which a received video is unavailable, and maximized a FOV of at least one second camera adjacent to the first camera. No minimizing or maximizing of camera fields of view (FOV), and no algorithm(s) therefor, whether by mechanical zoom or lens movements or by software-based control and image processing algorithms5, are apparently described, in sufficient detail, in the specification. Accordingly, the examiner believes that applicant has not demonstrated, to those skilled in the art, possession of the full scope6 of the now claimed invention, but has only, if anything, now described a desired result.
Claims 21 to 35 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
In claim 21, lines 6ff, and in claim 28, lines 3ff, “receiving videos from a plurality of cameras and ultrasonic data from a plurality of ultrasonic sensors of a vehicle” is indefinite because it is unclear if “of a vehicle” modifies only the ultrasonic sensors or if it also modifies the plurality of cameras.7 This portion of the rejection could be overcome by changing, “receiving videos from a plurality of cameras and ultrasonic data from a plurality of ultrasonic sensors of a vehicle”, to read, “receiving videos from a plurality of cameras of a vehicle and ultrasonic data from a plurality of ultrasonic sensors of the [[a]] vehicle”, if such be applicant’s intent.
In claim 21, lines 11ff, in claim 28, lines 8ff, and in claim 35, lines 11ff, “minimizing a field of view (FOV) of a first camera for which a received video is unavailable, and maximizing a FOV of at least one second camera adjacent to the first camera;” is indefinite because the specification apparently neither mentions nor clarifies any “field of view (FOV)” of a camera, nor indicates how (or what it would mean that) such an FOV would be minimized or maximized. In an attempt to advance prosecution, the examiner has suggested language (in the Response to Arguments section, above) that he believes would overcome this portion of the rejection.
Claims depending from claims expressly noted above are also rejected under 35 U.S.C. 112 by reason of their dependency from a noted claim that is rejected under 35 U.S.C. 112, for the reasons given.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 21, 22, 25 to 29, and 32 to 35 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Step 1 and Step 2A, Prong I:
Claim(s) 21, 22, 25 to 29, and 32 to 35, while (each) reciting a statutory category of invention defined in 35 U.S.C. 101 (a useful process, machine, manufacture, or composition of matter), is/are directed to an abstract idea, which is a judicial exception, the recited abstract idea being that of determining whether each received video is available by using a video determination algorithm, receiving a selection of one of the at least one detected parking space from a driver, and generating parking guide information comprising at least one of a path along which the vehicle is guidable to the selected parking space, a steering angle, or a speed of the vehicle, e.g., by receiving videos from a plurality of cameras and ultrasonic data from a plurality of ultrasonic sensors of a vehicle; determining whether each received video is available by using a video determination algorithm; based on the received videos, controlling the plurality of cameras by: minimizing a field of view (FOV) of a first camera for which a received video is unavailable, and maximizing a FOV of at least one second camera adjacent to the first camera; based on the received videos and the received ultrasonic data, detecting at least one parking space and an object adjacent to the vehicle; receiving a selection of one of the at least one detected parking space from a driver; and generating parking guide information comprising at least one of a path along which the vehicle is guidable to the selected parking space, a steering angle, or a speed of the vehicle, wherein each of the plurality of cameras is configured to capture a different section among sections preset by dividing a periphery of the vehicle, and each of the plurality of ultrasonic sensors is configured to detect a different section among the sections; wherein the operations further comprise detecting the at least one parking space and the object adjacent to the vehicle is based on one or more available videos and the ultrasonic data corresponding to a section where the received video is unavailable; wherein the operations further comprise: generating an image or a video comprising the parking guide information, the vehicle, and the selected parking space, and providing the generated image or the generated video to the driver; wherein the generated image or the generated video further comprises a notification about the first camera; wherein the operations further comprise warning the driver by using at least one of a visual output, an auditory output, or a tactile output based on the object being detected at a preset distance from the vehicle.
This abstract idea falls within the grouping(s) of mathematical concepts, mental processes, and/or certain methods of organizing human activity, distilled from case law, because it could be practically performed in the human mind as a mental process with use of a physical aid such as a pen or pencil and paper. See e.g., MPEP 2106.04(a)(2), III., and 2106.04(a)(2), III., B.
Step 2A, Prong II and Step 2B:
Additionally, applying a preponderance of the evidence standard, the abstract idea is not integrated (e.g., at Step 2A, Prong II) by the recitation of additional elements/limitations into a practical application (using the considerations set forth in MPEP §§ 2106.04(a)-(h)) because merely using a computer as a tool to perform an abstract idea or adding the words "apply it" (e.g., controlling cameras, providing the generated image or video to the driver, warning the driver, etc.) is not integrating the idea into a practical application of the idea, and e.g., looking at the claim as a whole and considering any additional elements/limitations individually and in combination, no (additional) particular machine, transformation, improvement to the functioning of a computer or an existing technological process or technical field, or meaningful application of the idea, beyond generally linking the idea to a technological environment (e.g., "implementation via computers", Alice) or adding insignificant extra-solution activity (e.g., controlling cameras used in their normal capacity or e.g., in an indefinite way to “maximiz[e]” or “minimiz[e]” indefinite FOVs that may not be used in/by any solution of the claims in any way), is recited in or encompassed by the claims.
Moreover, applying a preponderance of the evidence standard, the claim(s) does/do not include additional elements/limitations/steps (e.g., at Step 2B) that are, individually or in ordered combination, sufficient to amount to significantly more than the judicial exception because the elements/limitations/steps are recited at a high level of generality (e.g., the videos received from cameras and the ultrasonic data received from ultrasonic sensors included in a vehicle, controlling cameras used in their normal capacity, providing the generated image or video to the driver, warning the driver, etc.) so as to not favor eligibility (MPEP § 2106.05(d)) and/or are used e.g., for data/information gathering only or for other activities that were well-understood, routine, and conventional activity in the industry, for example (as to the cameras and ultrasonic sensors, etc.) as indicated in applicant's specification at published paragraphs [0003] to [0007], and moreover, the generically recited computer elements (e.g., a processor, a memory, cameras, etc.; see e.g., Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 110 USPQ2d 1984 (2014); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 112 USPQ2d 1093 (Fed. Cir. 2014); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 115 USPQ2d 1090 (Fed. Cir. 2015); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1321, 120 USPQ2d 1353, 1362; Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354-1355, 119 USPQ2d 1739, 1742 (Fed. Cir. 2016); TLI Communications LLC v. AV Auto, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (invoking computer server and telephone unit as a tool to perform an existing process, where the telephone unit includes a camera used in its normal capacity); FairWarning IP, LLC v. Iatric Sys., Inc., 839 F.3d 1089, 1096 (Fed. Cir. 2016) (“[T]he use of generic computer elements like a microprocessor or user interface do not alone transform an otherwise abstract idea into patent-eligible subject matter.”); Mobile Acuity, Ltd. v. Blippar Ltd., Case No. 22-2216 (Fed. Cir. Aug. 6, 2024); see also the 2019 PEG Advanced Module at pages 89, 145, etc.) do not add a meaningful limitation to the abstract idea because their use would be routine (and conventional) in any computer implementation of the idea.
Moreover, limiting or linking the use of the idea to a particular technological environment (e.g., a vehicle having video cameras and ultrasonic sensors) is not enough to transform the abstract idea into a patent-eligible invention (Flook[8]) e.g., because the preemptive effect of the claims on the idea within the field of use would be broad.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 21 to 24, 28 to 31, and 35 are rejected under 35 U.S.C. 103 as being unpatentable over Joos et al. (2019/0371175) in view of Hayakawa et al. (2021/0284140).
Joos et al. (‘175) reveals:
per claim 21, an apparatus comprising:
at least one processor [e.g., FIGS. 1, 4, 9, etc.; e.g., 461, 902, etc.]; and
a memory [e.g., 969 in FIG. 9, the medium in claim 17, etc.; e.g., paragraphs [0113], [0117], etc.] operably coupled to the at least one processor,
wherein the memory stores instructions [e.g., paragraphs [0113], [0117], claim 17, etc.] that, when executed by the at least one processor, cause the at least one processor to perform operations comprising:
receiving videos from a plurality of cameras [e.g., 309; e.g., the plurality of cameras positioned around the vehicle 101 and can be configured to provide a surround-view (obviously video, paragraphs [0039], [0105], etc.) of the vehicle 101 (paragraph [0036]), with the sensor data (e.g., images from the cameras) being updated in real-time (paragraph [0054]) in FIG. 2B, obviously as videos] and ultrasonic data from a plurality of ultrasonic sensors [e.g., 308; e.g., paragraph [0036], “the one or more sensors 105 including ultrasonic sensor(s)”] of the vehicle;
based on the received videos and the received ultrasonic data [e.g., based on the fusion map (232, 532) generated from the vehicle sensors in FIG. 3, including the cameras and ultrasonic sensors)], detecting at least one parking space and an object adjacent to the vehicle [e.g., at 233 in FIG. 2A, at 550 in FIG. 5C; and e.g., paragraphs [0086], [0088], etc.];
receiving a selection of one of the at least one detected parking space from a driver [e.g., executing 234 in FIG. 2A; and paragraph [0041], “Responsive to an indication from the mobile device 115 that a driver selects a specific one of the detected parking spaces . . .”]; and
generating parking guide information comprising at least one of a path along which the vehicle is guidable to the selected parking space [e.g., paragraph [0041], “a corresponding parking trajectory for performing the parking procedure for the designated parking space”], a steering angle [e.g., obvious from the corresponding parking trajectory], or a speed of the vehicle [e.g., paragraph [0062]],
Joos et al. (‘175) may not reveal the arrangements of the cameras and ultrasonic sensors, or that the object is detected, or the manners in which the cameras are controlled when the video(s) is/are determined to be available/unavailable, although the latter limitations (cameras being controlled) are not definitely recited. Moreover, Joos et al. (‘175) specifically teaches that availability of the parking spaces should be analyzed after the parking spaces are detected, to ensure that a vehicle (an object) is not currently parked in the detected parking space (paragraph [0079]).
However, in the context/field of an improved parking assist apparatus for a vehicle having a steering unit 50 that controls the traveling direction of the vehicle, Hayakawa et al. (‘140) teaches e.g., at paragraphs [0030] to [0050], [0054], [0068], [0070], [0076], [0081], etc. that sonar sensors 201 to 204[9] and cameras 301 to 304 are disposed at front, rear, and (left and right) side sections of the vehicle to detect parking spaces and obstacles (e.g., parked vehicles) so that traveling of the vehicle on a route to the parking space PS between parked vehicles may be controlled, and that the steering unit 50 provided with a steering sensor is controlled to execute an automatic parking process, and a fault in a camera is detected by a camera control unit 313 when no signal is transmitted from a camera or in the case where an image in a specific region (pixels) is not changed even when the image signal is transmitted. Moreover, when a fault has occurred on the LS camera 303, the main control unit 214 detects the parking space PS by using only the detection signal of the LS sonar 203, and similarly that when the RS camera 304 has a fault, the main control unit executes the second parking process using only the detection signal of the RS sonar in normal state. Additionally, when it is determined that the vehicle may contact an obstacle, it is notified through the display unit 41 that the vehicle is stopped by executing the emergency braking control (paragraph [0083]), and the main control unit 214 controls the display unit 41 to display an image notifying the occupant of a detecting apparatus in a fault state and the execution of the second parking state detecting process (paragraphs [0103], [0115], etc.).
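For illustration only, the following minimal sketch (hypothetical names; not asserted to be the reference’s actual code) reflects the per-section fallback described above, in which a section whose camera is in a fault state is covered using only the corresponding sonar (ultrasonic) detection signal:

def detection_signals(section: str, camera_ok: dict, camera_signal: dict, sonar_signal: dict) -> list:
    """Return the signals used to detect a parking space in one section of the vehicle periphery."""
    if camera_ok[section]:
        # Normal (first) parking space detecting process: camera and sonar together.
        return [camera_signal[section], sonar_signal[section]]
    # Camera fault (second) process: only the detection signal of the corresponding sonar.
    return [sonar_signal[section]]

# Example: the left-side (LS) camera has a fault, so only the LS sonar signal is used.
print(detection_signals("left",
                        camera_ok={"left": False},
                        camera_signal={"left": "LS-camera-video"},
                        sonar_signal={"left": "LS-sonar-echoes"}))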
It would have been obvious before the effective filing date of the claimed invention to implement or modify the Joos et al. (‘175) server, method, and medium for automated parking so that the cameras and the ultrasonic (sonar) sensors would have been arranged at sections around the vehicle, as taught by the cameras and sonar sensors in Hayakawa et al. (‘140) in order to detect parking spaces and objects/obstacles adjacent parking spaces (e.g., parked vehicles), so that a fault in a camera would have been detected by a camera control unit 313 when no signal is transmitted from a camera or in the case where an image in a specific region (pixels) is not changed even when the image signal is transmitted, as taught by Hayakawa et al. (‘140), so that, when a fault had occurred at a particular camera so that its video was unavailable (no signal), the parking spaces would have been detected using only the detection signal of a corresponding sonar/ultrasonic sensor, as taught by Hayakawa et al. (‘140), while notifying the occupant of the fault state, etc. through a display unit, as taught by Hayakawa et al. (‘140), and so that steering would have been controlled in accordance with a steering [obviously angle] sensor, as taught by Hayakawa et al. (‘140), in order that parking spaces and obstacles/objects (e.g., parked vehicles) would have been reliably detected at all sides of the vehicle while parking, in order that faults in the cameras would have been compensated for by operation of the corresponding sonar/ultrasonic sensor, as taught by Hayakawa et al. (‘140), with a reasonable expectation of success, and e.g., as a use of a known technique to improve similar devices (methods, or products) in the same way.
As such, the implemented or modified Joos et al. (‘175) server, method, and medium for automated parking would have rendered obvious:
per claim 21, . . . determining whether each of the received videos is available [e.g., when the (obviously video) signal is or is not transmitted from the camera, in Hayakawa et al. (‘140), wherein the camera control unit 313 in Hayakawa et al. (‘140) determines whether a fault has occurred in the respective cameras, and detects a fault in a camera when no signal (e.g., obviously video) is transmitted from the camera, and the video is thus unavailable; and implicit in executing 231 in FIG. 2A of Joos et al. (‘175), by fusing available sensor data (and obviously not fusing data which is not available)] by using a video determination algorithm [e.g., paragraphs [0050], etc. in Hayakawa et al. (‘140)];
based on the received videos [e.g., the images (obviously video) with the changing pixels at paragraph [0050] in Hayakawa et al. (‘140), obviously obtained from the camera(s) 309 in Joos et al. (‘175); e.g., the plurality of cameras positioned around the vehicle 101 that can be configured to provide a surround-view (obviously video, paragraphs [0036], [0105], etc.) of the vehicle 101 (paragraph [0036]), with the sensor data (e.g., images from the cameras) being updated in real-time (paragraph [0054]) in FIG. 2B, obviously as videos], controlling the cameras by:
minimizing a field of view (FOV) of a first camera for which a received video is unavailable [e.g., when no signal is transmitted from the camera (paragraph [0050]) in Hayakawa et al. (‘140) and the angle of the camera from which no signal is transmitted obviously becomes zero], and
maximizing a FOV of at least one second camera adjacent to the first camera [e.g., when a signal (no fault) is transmitted from the camera (paragraph [0050]) in Hayakawa et al. (‘140) which is obviously adjacent to a faulty camera, and the angle of the camera from which a signal (no fault) is transmitted obviously becomes the (maximized) angle of the camera field of view];
based on the received videos and the received ultrasonic data [e.g., in Joos et al. (‘175), based on the fusion map (232, 532) generated from the vehicle sensors in FIG. 3, including the cameras and ultrasonic sensors)], detecting at least one parking space and an object [e.g., the parked vehicles (in FIGS. 5A to 5C) in Hayakawa et al. (‘140), for determining parking space availability in Joos et al. (‘175); and e.g., paragraphs [0086], [0088], etc. in Joos et al. (‘175)] adjacent to the vehicle [e.g., in Joos et al. (‘175), at 233 in FIG. 2A, at 550 in FIG. 5C; and in FIGS. 5A to 5C in Hayakawa et al. (‘140)];
receiving a selection of one of the at least one detected parking space from a driver [e.g., in Joos et al. (‘175), executing 234 in FIG. 2A; and paragraph [0041], “Responsive to an indication from the mobile device 115 that a driver selects a specific one of the detected parking spaces . . .”]; and
generating parking guide information comprising at least one of a path along which the vehicle is guidable to the selected parking space [e.g., paragraph [0041] in Joos et al. (‘175), “a corresponding parking trajectory for performing the parking procedure for the designated parking space”], a steering angle [e.g., obvious from the corresponding parking trajectory in Joos et al. (‘175)], or a speed of the vehicle [e.g., paragraph [0062] in Joos et al. (‘175)],
wherein each of the plurality of cameras [e.g., 309 in Joos et al. (‘175)] is configured to capture a different section among sections preset by dividing a periphery of the vehicle [e.g., as shown (e.g., at R11 to R14) in FIG. 1 of Hayakawa et al. (‘140)], and each of the plurality of ultrasonic sensors [e.g., 308 in Joos et al. (‘175)] is configured to detect a different section among the sections [e.g., as shown (e.g., at R1 to R4) in FIG. 1 of Hayakawa et al. (‘140)],
per claim 22, depending from claim 21, wherein the operations further comprise detecting the at least one parking space and the object adjacent to the vehicle is based on one or more available videos and the ultrasonic data [e.g., paragraphs [0050], [0068], [0070], [0076], [0081], etc. in Hayakawa et al. (‘140), wherein when a fault has occurred on the LS camera 303, the main control unit 214 detects the parking space PS by using only the detection signal of the LS sonar 203, and similarly that when the RS camera 304 has a fault, the main control unit executes the second parking process using only the detection signal of the RS sonar in normal state, with the “only” being understood to substitute the sonar data (only) for the camera and sonar data; and paragraph [0047] of Joos et al. (‘175), “For instance, the fusion map may reflect a combination of distancing data from an ultrasonic sensor and an image acquired at the same point in time by a camera, the combination of these data being fused with positioning data and compared to previously acquired maps of the external environment to determine the vehicle environment”];
per claim 23, depending from claim 22, wherein the operations further comprise:
determining whether a received video taken for a section including the selected parking space is available [e.g., the camera control unit 313 in Hayakawa et al. (‘140) determines whether or not a fault has occurred in the respective cameras, and detects a fault in a camera when no signal (e.g., obviously video) is transmitted from the camera; and implicit in executing 231 in FIG. 2A of Joos et al. (‘175), by fusing available sensor data (and obviously not fusing data which is not available)]; and
controlling the vehicle to perform autonomous parking by using a first remote smart parking assist (RSPA) or a second RSPA [e.g., for remote fleet parking, as taught by Joos et al. (‘175) e.g., at paragraphs [0116], etc.; and using the first parking space detecting process or the second parking space detecting process (and the first/second angle adjusting and parking processes) as shown in FIGS. 6A to 6C of Hayakawa et al. (‘140)] based on availability of the received video taken for the section including the selected parking space [e.g., when there is no fault in a camera as taught by Hayakawa et al. (‘140), and the parking space PS is obviously no longer determined using the sonar alone (e.g., paragraphs [0068], [0070], [0076], [0081], etc.), but rather is detected using the sonar and image signal (e.g., paragraphs [0064], [0069], etc.)];
per claim 24, depending from claim 23, wherein controlling the vehicle comprises:
performing the autonomous parking by using the first RSPA [e.g., for remote fleet parking, as taught by Joos et al. (‘175) e.g., at paragraphs [0116], etc., performed e.g., in the presence of the camera fault(s) taught by Hayakawa et al. (‘140) as shown in FIGS. 6A to 6C] when the received video taken of the section including the received one parking space is unavailable [e.g., when no signal is transmitted from the camera (paragraph [0050]) in Hayakawa et al. (‘140) and the angle of the camera from which no signal is transmitted obviously becomes zero], and
performing the autonomous parking by using the second RSPA [e.g., as indicated by the respective first and second (parking space detecting, angle adjusting, and parking) processes as shown in FIGS. 6A to 6C in Hayakawa et al. (‘140), when there is no camera fault, obviously used for the remote fleet parking in Joos et al. (‘175)] when the video taken of the section including the received one parking space is available [e.g., when a signal (no fault) is transmitted from the camera (paragraph [0050]) in Hayakawa et al. (‘140) which is obviously adjacent to a faulty camera, and the angle of the camera from which a signal (no fault) is transmitted obviously becomes the (maximized) angle of the camera field of view];
per claim 28, a computer-implemented method performed by an autonomous parking assist apparatus, the computer-implemented method comprising:
receiving videos from a plurality of cameras [e.g., in Joos et al. (‘175), 309; e.g., and the plurality of cameras positioned around the vehicle 101 and can be configured to provide a surround-view (obviously video, paragraphs [0039], [0105], etc.) of the vehicle 101 (paragraph [0036]), with the sensor data (e.g., images from the cameras) being updated in real-time (paragraph [0054]) in FIG. 2B, obviously as videos] and ultrasonic data from a plurality of ultrasonic sensors [e.g., in Joos et al. (‘175), 308; and e.g., paragraph [0036], “the one or more sensors 105 including ultrasonic sensor(s)”; and the sonar sensors in Hayakawa et al. (‘140)] of a vehicle;
determining whether each of the received videos is available [e.g., when the (obviously video) signal is or is not transmitted from the camera, in Hayakawa et al. (‘140), wherein the camera control unit 313 in Hayakawa et al. (‘140) determines whether a fault has occurred in the respective cameras, and detects a fault in a camera when no signal (e.g., obviously video) is transmitted from the camera, and the video is thus unavailable; and implicit in executing 231 in FIG. 2A of Joos et al. (‘175), by fusing available sensor data (and obviously not fusing data which is not available)] by using a video determination algorithm [e.g., paragraphs [0050], etc. in Hayakawa et al. (‘140)];
based on the received videos [e.g., the images (obviously video) with the changing pixels at paragraph [0050] in Hayakawa et al. (‘140), obviously obtained from the camera(s) 309 in Joos et al. (‘175); e.g., the plurality of cameras positioned around the vehicle 101 that can be configured to provide a surround-view (obviously video, paragraphs [0036], [0105], etc.) of the vehicle 101 (paragraph [0036]), with the sensor data (e.g., images from the cameras) being updated in real-time (paragraph [0054]) in FIG. 2B, obviously as videos], controlling the cameras by:
minimizing a field of view (FOV) of a first camera for which a received video is unavailable [e.g., when no signal is transmitted from the camera (paragraph [0050]) in Hayakawa et al. (‘140) and the angle of the camera from which no signal is transmitted obviously becomes zero], and
maximizing a FOV of at least one second camera adjacent to the first camera [e.g., when a signal (no fault) is transmitted from the camera (paragraph [0050]) in Hayakawa et al. (‘140) which is obviously adjacent to a faulty camera, and the angle of the camera from which a signal (no fault) is transmitted obviously becomes the (maximized) angle of the camera field of view];
based on the received videos and the received ultrasonic data [e.g., in Joos et al. (‘175), based on the fusion map (232, 532) generated from the vehicle sensors in FIG. 3, including the cameras and ultrasonic sensors)], detecting at least one parking space and an object [e.g., the parked vehicles (in FIGS. 5A to 5C) in Hayakawa et al. (‘140), for determining parking space availability in Joos et al. (‘175); and e.g., paragraphs [0086], [0088], etc. in Joos et al. (‘175)] adjacent to the vehicle [e.g., in Joos et al. (‘175), at 233 in FIG. 2A, at 550 in FIG. 5C; and in FIGS. 5A to 5C in Hayakawa et al. (‘140)];
receiving a selection of one of the at least one detected parking space from a driver [e.g., in Joos et al. (‘175), executing 234 in FIG. 2A; and paragraph [0041], “Responsive to an indication from the mobile device 115 that a driver selects a specific one of the detected parking spaces . . .”]; and
generating parking guide information comprising at least one of a path along which the vehicle is guidable to the selected parking space [e.g., paragraph [0041] in Joos et al. (‘175), “a corresponding parking trajectory for performing the parking procedure for the designated parking space”], a steering angle [e.g., obvious from the corresponding parking trajectory in Joos et al. (‘175)], or a speed of the vehicle [e.g., paragraph [0062] in Joos et al. (‘175)],
wherein each of the plurality of cameras [e.g., 309 in Joos et al. (‘175)] is configured to capture a different section among sections preset by dividing a periphery of the vehicle [e.g., as shown (e.g., at R11 to R14) in FIG. 1 of Hayakawa et al. (‘140)], and each of the plurality of ultrasonic sensors [e.g., 308 in Joos et al. (‘175)] is configured to detect a different section among the sections [e.g., as shown (e.g., at R1 to R4) in FIG. 1 of Hayakawa et al. (‘140)],
per claim 29, depending from claim 28, wherein detecting the at least one parking space and the object adjacent to the vehicle is based on one or more available videos and the ultrasonic data corresponding section where the received video is unavailable [e.g., paragraphs [0050], [0068], [0070], [0076], [0081], etc. in Hayakawa et al. (‘140), wherein when a fault has occurred on the LS camera 303, the main control unit 214 detects the parking space PS by using only the detection signal of the LS sonar 203, and similarly that when the RS camera 304 has a fault, the main control unit executes the second parking process using only the detection signal of the RS sonar in normal state, with the “only” being understood to substitute the sonar data (only) for the camera and sonar data; and paragraph [0047] of Joos et al. (‘175), “For instance, the fusion map may reflect a combination of distancing data from an ultrasonic sensor and an image acquired at the same point in time by a camera, the combination of these data being fused with positioning data and compared to previously acquired maps of the external environment to determine the vehicle environment”];
per claim 30, depending from claim 29, further comprising:
determining whether a received video for a section including the selected parking space is available [e.g., the camera control unit 313 in Hayakawa et al. (‘140) determines whether or not a fault has occurred in the respective cameras, and detects a fault in a camera when no signal (e.g., obviously video) is transmitted from the camera; and implicit in executing 231 in FIG. 2A of Joos et al. (‘175), by fusing available sensor data (and obviously not fusing data which is not available)]; and
controlling the vehicle to perform the autonomous parking by using a first remote smart parking assist (RSPA) or a second RSPA [e.g., for remote fleet parking, as taught by Joos et al. (‘175) e.g., at paragraphs [0116], etc.; and using the first parking space detecting process or the second parking space detecting process (and the first/second angle adjusting and parking processes) as shown in FIGS. 6A to 6C of Hayakawa et al. (‘140)] based on availability of the received video for the section including the selected parking space [e.g., when there is no fault in a camera as taught by Hayakawa et al. (‘140), and the parking space PS is obviously no longer determined using the sonar alone (e.g., paragraphs [0068], [0070], [0076], [0081], etc.), but rather is detected using the sonar and image signal (e.g., paragraphs [0064], [0069], etc.)];
per claim 31, depending from claim 30, wherein controlling the vehicle comprises:
performing the autonomous parking by using the first RSPA [e.g., for remote fleet parking, as taught by Joos et al. (‘175) e.g., at paragraphs [0116], etc., performed e.g., in the presence of the camera fault(s) taught by Hayakawa et al. (‘140) as shown in FIGS. 6A to 6C] when the received video taken for the section including the selected parking space is unavailable, and
performing the autonomous parking by using the second RSPA [e.g., as indicated by the respective first and second (parking space detecting, angle adjusting, and parking) processes as shown in FIGS. 6A to 6C in Hayakawa et al. (‘140), when there is no camera fault, obviously used for the remote fleet parking in Joos et al. (‘175)] when the received video taken for the section including the selected parking space is available;
per claim 35, an autonomous parking assist apparatus comprising:
at least one processor [e.g., FIGS. 1, 4, 9, etc. in Joos et al. (‘175); e.g., 461, 902, etc.]; and
a memory [e.g., 969 in FIG. 9 of Joos et al. (‘175), the medium in claim 17, etc.; e.g., paragraphs [0113], [0117], etc.] operably coupled to the at least one processor,
wherein the memory stores instructions [e.g., in Joos et al. (‘175), paragraphs [0113], [0117], claim 17, etc.] that, when executed by the at least one processor, cause the at least one processor to perform operations comprising:
receiving, from a plurality of cameras located at a vehicle, videos [e.g., in Joos et al. (‘175), 309; e.g., and the plurality of cameras positioned around the vehicle 101 and can be configured to provide a surround-view (obviously video, paragraphs [0039], [0105], etc.) of the vehicle 101 (paragraph [0036]), with the sensor data (e.g., images from the cameras) being updated in real-time (paragraph [0054]) in FIG. 2B, obviously as videos];
receiving, from a plurality of ultrasonic sensors located at the vehicle, ultrasonic data [e.g., in Joos et al. (‘175), from 308; and e.g., paragraph [0036], “the one or more sensors 105 including ultrasonic sensor(s)”; and the sonar sensors in Hayakawa et al. (‘140)];
determining whether each of the received videos is available [e.g., when the (obviously video) signal is or is not transmitted from the camera, in Hayakawa et al. (‘140), wherein the camera control unit 313 in Hayakawa et al. (‘140) determines whether a fault has occurred in the respective cameras, and detects a fault in a camera when no signal (e.g., obviously video) is transmitted from the camera, and the video is thus unavailable; and implicit in executing 231 in FIG. 2A of Joos et al. (‘175), by fusing available sensor data (and obviously not fusing data which is not available)] by using a video determination algorithm [e.g., paragraphs [0050], etc. in Hayakawa et al. (‘140)];
based on the received videos [e.g., the images (obviously video) with the changing pixels at paragraph [0050] in Hayakawa et al. (‘140), obviously obtained from the camera(s) 309 in Joos et al. (‘175); e.g., the plurality of cameras positioned around the vehicle 101 that can be configured to provide a surround-view (obviously video, paragraphs [0036], [0105], etc.) of the vehicle 101 (paragraph [0036]), with the sensor data (e.g., images from the cameras) being updated in real-time (paragraph [0054]) in FIG. 2B, obviously as videos], controlling the cameras by:
minimizing a field of view (FOV) of a first camera for which a received video is unavailable [e.g., when no signal is transmitted from the camera (paragraph [0050]) in Hayakawa et al. (‘140) and the angle of the camera from which no signal is transmitted obviously becomes zero], and
maximizing a FOV of at least one second camera adjacent to the first camera [e.g., when a signal (no fault) is transmitted from the camera (paragraph [0050]) in Hayakawa et al. (‘140) which is obviously adjacent to a faulty camera, and the angle of the camera from which a signal (no fault) is transmitted obviously becomes the (maximized) angle of the camera field of view];
based on the received videos and the received ultrasonic data, detecting at least one parking space and an object [e.g., the parking spaces and the parked vehicles (in FIGS. 5A to 5C) in Hayakawa et al. (‘140), for determining parking space availability in Joos et al. (‘175); and e.g., paragraphs [0086], [0088], etc. in Joos et al. (‘175)] in a vicinity of the vehicle [e.g., in Joos et al. (‘175), at 233 in FIG. 2A, at 550 in FIG. 5C; and in FIGS. 5A to 5C in Hayakawa et al. (‘140)];
receiving, from a driver of the vehicle, a selection of one of the at least one detected parking space [e.g., executing 234 in FIG. 2A of Joos et al. (‘175); and paragraph [0041], “Responsive to an indication from the mobile device 115 that a driver selects a specific one of the detected parking spaces . . .”]; and
generating parking guide information comprising a path along which the vehicle is guidable to the selected parking space [e.g., in Joos et al. (‘175), paragraph [0041], “a corresponding parking trajectory for performing the parking procedure for the designated parking space”],
wherein the cameras [e.g., 309 in Joos et al. (‘175)] and the ultrasonic sensors [e.g., 308 in Joos et al. (‘175)] are located at the vehicle to cover a perimeter of the vehicle [e.g., as shown in FIG. 1 of Hayakawa et al. (‘140)], each camera and each ultrasonic sensor covering specific pre-defined sections of the perimeter [e.g., as shown (e.g., at R1 to R4 and R11 to R14) in FIG. 1 of Hayakawa et al. (‘140)];
Claims 25 to 27 and 32 to 34 are rejected under 35 U.S.C. 103 as being unpatentable over Joos et al. (2019/0371175) in view of Hayakawa et al. (2021/0284140) as applied to claims 21 and 28 above, and further in view of Kobayashi et al. (2014/0324310).
Joos et al. (‘175) as implemented or modified in view of Hayakawa et al. (‘140) has been described above.
The implemented or modified Joos et al. (‘175) server, method, and medium for automated parking may not reveal the display unit or the generating of the image or video, as claimed, or the warning unit or the warning, as claimed.
However, in the context/field of an improved parking assist control apparatus in which the steered angle of the steering wheel may be controlled to run the vehicle along the running path (paragraph [0033]), Kobayashi et al. (‘310) teaches e.g., at paragraphs [0031] to [0033], [0068], [0069], [0104], [0160], FIGS. 8, 13, 14, etc. that the parking assist apparatus may display the parking path by interposing it on a top view image (FIGS. 8 and 13) together with the vehicle and a parking frame of the parking target position, and outputs an alarm (paragraph [0146]) when an alarm operation distance Lsh is greater than a relative distance of an obstacle X to the vehicle 1.
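For illustration only, a one-function sketch (hypothetical names) of the alarm condition described above, in which an alarm is output when the alarm operation distance Lsh exceeds the relative distance of the obstacle to the vehicle:

def should_alarm(alarm_operation_distance_lsh_m: float, obstacle_relative_distance_m: float) -> bool:
    """Alarm when the alarm operation distance Lsh is greater than the obstacle's relative distance."""
    return alarm_operation_distance_lsh_m > obstacle_relative_distance_m

print(should_alarm(1.5, 0.8))  # True: the obstacle is closer than the alarm operation distance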
It would have been obvious before the effective filing date of the claimed invention to implement or further modify the Joos et al. (‘175) server, method, and medium for automated parking so that the parking path would have been displayed (on a display of the vehicle controlled by the VECU in Joos et al. (‘175), such as the display unit 41 in Hayakawa et al. (‘140)) by interposing it on a top view image (FIGS. 8 and 13 in Kobayashi et al. (‘310)) together with the vehicle and a parking frame of the parking target position, as taught by Kobayashi et al. (‘310), and so that an alarm (paragraph [0146]) would have been outputted, as taught by Kobayashi et al. (‘310), when an alarm operation distance Lsh was greater than a relative distance of an obstacle X to the vehicle 1 as detected by an obstacle detector (13 in Kobayashi et al. (‘310); 308, etc. in Joos et al. (‘175)), in order that the vehicle operator/user/driver would be informed of the parking trajectory/path and warned of relatively close obstacles, with a reasonable expectation of success, and e.g., as a use of a known technique to improve similar devices (methods, or products) in the same way.
As such, the implemented or further modified Joos et al. (‘175) server, method, and medium for automated parking would have rendered obvious:
per claim 25, depending from claim 21, wherein the operations further comprise:
generating an image or video comprising the parking guide information, the vehicle, and the selected parking space [e.g., the display in paragraph [0031] to [0033], [0068], [0069], [0104], [0160], FIGS. 8, 13, etc. of Kobayashi et al. (‘310); the image display unit 40, 41 in Hayakawa et al. (‘140); and the display controlled by the VECU 902 in Joos et al. (‘175)], and
providing the generated image or the generated video to the driver [e.g., as shown in FIGS. 8, 13, etc. in Kobayashi et al. (‘310), and as described at paragraphs [0031] to [0033], [0068], [0069], [0104], [0160], etc.];
per claim 26, depending from claim 25, wherein the generated image or the generated video further comprises a notification about the the first camera [e.g., as taught by Hayakawa et al. (‘140) in paragraphs [0103], [0115], [0116], etc., “Thus, the occupant is able to recognize the state of the parking space detecting process.”];
per claim 27, depending from claim 21, wherein the operations further comprise warning the driver by using at least one of a visual output, an auditory output, or a tactile output based on the object being detected at a preset distance from the vehicle [e.g., as described at paragraphs [0146], etc. of Kobayashi et al. (‘310)];
per claim 32, depending from claim 28, further comprising:
generating an image or video comprising the parking guide information, the vehicle, and the selected parking space [e.g., the display in paragraph [0031] to [0033], [0068], [0069], [0104], [0160], FIGS. 8, 13, etc. of Kobayashi et al. (‘310); the image display unit 40, 41 in Hayakawa et al. (‘140); and the display controlled by the VECU 902 in Joos et al. (‘175)], and
providing the generated image or the generated video to the driver [e.g., as shown in FIGS. 8, 13, etc. in Kobayashi et al. (‘310), and as described at paragraphs [0031] to [0033], [0068], [0069], [0104], [0160], etc.];
per claim 33, depending from claim 32, wherein the generated image or the generated video further comprises a notification about the camera [e.g., as taught by Hayakawa et al. (‘140) in paragraphs [0103], [0115], [0116], etc., “Thus, the occupant is able to recognize the state of the parking space detecting process.”];
per claim 34, depending from claim 28, further comprising warning the driver by using at least one of a visual output, an auditory output, or a tactile output based on the object being detected at a preset distance from the vehicle [e.g., as described at paragraphs [0146], etc. of Kobayashi et al. (‘310)];
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to David A Testardi whose telephone number is (571)270-3528. The examiner can normally be reached Monday, Tuesday, Thursday, 8:30am - 5:30pm E.T., and Friday, 8:30 am - 12:30 pm E.T.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Rachid Bendidi can be reached at (571) 272-4896. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID A TESTARDI/Primary Examiner, Art Unit 3664
1 Note that if this amendment is made to the independent claims, then in claim 22, line 5, and in claim 29, line 5, “a section where the received video is unavailable” should apparently be changed to, “the [[a]] section where the received video is unavailable”, for proper/precise antecedent basis.
2 See e.g., https://proofed.com/writing-tips/using-articles-a-an-the-before-acronyms-and-initialisms/
3 See the 2019 35 U.S.C. 112 Compliance Federal Register Notice (Federal Register, Vol. 84, No. 4, Monday, January 7, 2019, pages 57 to 63). See also http://ptoweb.uspto.gov/patents/exTrain/documents/2019-112-guidance-initiative.pptx . Quoting the FR Notice at pages 61 and 62, "The Federal Circuit emphasized that ‘‘[t]he written description requirement is not met if the specification merely describes a ‘desired result.’ ’’ Vasudevan, 782 F.3d at 682 (quoting Ariad, 598 F.3d at 1349). . . . When examining computer-implemented, software-related claims, examiners should determine whether the specification discloses the computer and the algorithm(s) that achieve the claimed function in sufficient detail that one of ordinary skill in the art can reasonably conclude that the inventor possessed the claimed subject matter at the time of filing. An algorithm is defined, for example, as 'a finite sequence of steps for solving a logical or mathematical problem or performing a task.' Microsoft Computer Dictionary (5th ed., 2002). Applicant may 'express that algorithm in any understandable terms including as a mathematical formula, in prose, or as a flow chart, or in any other manner that provides sufficient structure.' Finisar, 523 F.3d at 1340 (internal citation omitted). It is not enough that one skilled in the art could theoretically write a program to achieve the claimed function, rather the specification itself must explain how the claimed function is achieved to demonstrate that the applicant had possession of it. See, e.g., Vasudevan, 782 F.3d at 682–83. If the specification does not provide a disclosure of the computer and algorithm(s) in sufficient detail to demonstrate to one of ordinary skill in the art that the inventor possessed the invention that achieves the claimed result, a rejection under 35 U.S.C. 112(a) for lack of written description must be made. See MPEP § 2161.01, subsection I."
4 See http://www.uspto.gov/sites/default/files/documents/fnctnllnggcmptr.pptx at page 29.
5 See applicant’s arguments at page 10, lines 3 to 13 of the 23 October 2025 response.
6 See MPEP 2161.01, I. and LizardTech Inc. v. Earth Resource Mapping Inc., 424 F.3d 1336, 1345 (Fed. Cir. 2005) cited therein ("Whether the flaw in the specification is regarded as a failure to demonstrate that the applicant possessed the full scope of the invention recited in [the claim] or a failure to enable the full breadth of that claim, the specification provides inadequate support for the claim under [§ 112(a)]").
7 See MPEP 2173.02, I., “For example, if the language of a claim, given its broadest reasonable interpretation, is such that a person of ordinary skill in the relevant art would read it with more than one reasonable interpretation, then a rejection under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph is appropriate.”
8 See e.g., Bilski v. Kappos, 561 U.S. 593 ("Flook established that limiting an abstract idea to one field of use . . . did not make the concept patentable.")
9 The examiner understands from the Wikipedia article previously cited that the acoustic frequencies used in sonar systems obviously range from very low (infrasonic) to extremely high (ultrasonic), as would have been understood by those having ordinary skill in the art. See also previously cited Hayakawa (2017/0253236) who teaches at paragraphs [0005], [0043], etc. that the distance measuring units (16, 17) in a parking assistance device are sonar items (ultrasonic detectors) using ultrasonic sonar.