Prosecution Insights
Last updated: April 19, 2026
Application No. 18/752,821

Monitoring Method for Safe Area and Safety System and Displaying Method Thereof

Non-Final OA: §102, §103, §112

Filed: Jun 25, 2024
Examiner: WERNER, DAVID N
Art Unit: 2487
Tech Center: 2400 — Computer Networks
Assignee: Whetron Electronics Co. Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 68% (Favorable)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 68% (above average; 483 granted / 713 resolved; +9.7% vs TC avg)
Interview Lift: +16.2% (strong; resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline); 32 currently pending
Total Applications: 745 (career history, across all art units)
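The headline numbers above are simple arithmetic on the stated counts. A quick sanity check (the 483/713 counts and the +16.2 point lift come from the panel above; the rounding is ours):

```python
# Sanity-check the dashboard's headline figures from the stated counts.
granted, resolved = 483, 713          # career totals shown above
allow_rate = granted / resolved       # career allow rate

interview_lift = 16.2                 # percentage-point lift shown above
with_interview = allow_rate * 100 + interview_lift

print(f"allow rate:     {allow_rate:.1%}")       # ~67.7%, displayed as 68%
print(f"with interview: {with_interview:.1f}%")  # ~83.9%, displayed as 84%
```

The displayed 68% and 84% are the rounded forms of these two results.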

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 44.8% (+4.8% vs TC avg)
§102: 23.1% (-16.9% vs TC avg)
§112: 16.1% (-23.9% vs TC avg)
TC averages are estimates • Based on career data from 713 resolved cases
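Each statute's delta is quoted against a Tech Center baseline that is not stated directly, but it can be backed out from the two numbers shown per statute (a hypothetical back-calculation, not data from the page):

```python
# Back out the implied Tech Center average for each statute from the
# rate shown and its "vs TC avg" delta: implied TC avg = rate - delta.
stats = {             # statute: (rate %, delta vs TC avg %)
    "§101": (7.4, -32.6),
    "§102": (23.1, -16.9),
    "§103": (44.8, +4.8),
    "§112": (16.1, -23.9),
}
for statute, (rate, delta) in stats.items():
    print(f"{statute}: implied TC avg = {rate - delta:.1f}%")
```

All four statutes back out to the same 40.0% baseline, suggesting a single TC-wide figure is used for every comparison.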

Office Action

§102 §103 §112
DETAILED ACTION

This is the First Action on the Merits for U.S. Patent Application No. 18/752,821, filed 25 June 2024, which claims foreign priority to Taiwan Application No. TW113117218, filed 9 May 2024. Claims 1–13 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Requirement for Information – 37 C.F.R. § 1.105

The following is a quotation of the appropriate sections of 37 C.F.R. § 1.105(a)(1) that form the basis for the Requirement for Information made in this Office action:

In the course of examining or treating a matter in a pending or abandoned application filed under 35 U.S.C. 111 or 371 (including a reissue application), in a patent, or in a reexamination proceeding, the examiner or other Office employee may require the submission, from individuals identified under § 1.56(c), or any assignee, of such information as may be reasonably necessary to properly examine or treat the matter, for example:

(ii) Search: Whether a search of the prior art was made, and if so, what was searched.

(viii) Technical information known to applicant. Technical information known to applicant concerning the related art, the disclosure, the claimed subject matter, other factual information pertinent to patentability, or concerning the accuracy of the examiner’s stated interpretation of such items.

Applicant and the assignee of this application are required under 37 C.F.R. § 1.105 to provide the following information that the examiner has determined is reasonably necessary to the examination of this application. The information is required to enter in the record the art suggested by the applicant as relevant to this examination in the European Search Report for corresponding application EP 4648025 A1 and the Taiwan Examination Report for corresponding Application TW 113117218.

In response to this requirement, please provide a copy of the following documents cited in the European Search Report:

Huang Po-Yuan et al., “Rear Obstacle Warning for Reverse Driving using Stereo Vision Techniques”, 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), IEEE, 6 October 2019 (2019-10-06)

Yin Ziyi et al., “Learning to Plan Semantic Free-Space Boundary”, 2019 IEEE International Conference on Image Processing (ICIP), IEEE, 22 September 2019 (2019-09-22)

In response to this requirement, please provide a copy of the following documents cited in the Taiwan Examination Report:

CN 114291077 A
CN 103253263 A
CN 116597609 A
CN 114582132 A

In responding to those requirements that require copies of documents, where the document is a bound text or a single article over 50 pages, the requirement may be met by providing copies of those pages that provide the particular subject matter indicated in the requirement, or, where such subject matter is not indicated, the subject matter found in applicant’s disclosure.

The fee and certification requirements of 37 C.F.R. § 1.97 are waived for those documents submitted in reply to this requirement. The other requirements of 37 C.F.R. § 1.98 are not waived. This waiver extends only to those documents within the scope of this requirement under 37 C.F.R. § 1.105 that are included in the applicant’s first complete communication responding to this requirement. Any supplemental replies subsequent to the first communication responding to this requirement, and any information disclosures beyond the scope of this requirement under 37 C.F.R. § 1.105, are subject to the fee and certification requirements of 37 C.F.R. § 1.97.

The applicant is reminded that the reply to this requirement must be made with candor and good faith under 37 C.F.R. § 1.56.
Where the applicant does not have or cannot readily obtain an item of required information, a statement that the item is unknown or cannot be readily obtained may be accepted as a complete reply to the requirement for that item.

This requirement is an attachment of the enclosed Office action. A complete reply to the enclosed Office action must include a complete reply to this requirement. The time period for reply to this requirement coincides with the time period for reply to the enclosed Office action.

Claim Objections

Claims 3–5 and 12 are objected to for minor informalities. See the claim rejections infra for suggested corrections.

Claims 1–6, 9, 10, and 12 are objected to for want of conformity with 37 C.F.R. § 1.75(i), which requires each element or step of a claim to be separated by a line indentation. Appropriate correction is required.

Claim Rejections - 35 U.S.C. § 112

The following is a quotation of 35 U.S.C. § 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 10 and 11 are rejected under 35 U.S.C. § 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. § 112, the applicant) regards as the invention.

Claim 10 is directed to a “safety system” comprising a processing module [1] that is “configured to” perform the claim 1 method. It is unclear whether Applicant intends that infringement of this claim requires actually performing the claim 1 method, or that merely making, selling, or offering to sell the claim 10 driving target is sufficient to infringe claim 10. IPXL Holdings, L.L.C. v. Amazon.com, Inc., 430 F.3d 1377, 77 U.S.P.Q.2d 1140, 1145 (Fed. Cir. 2005) (claim is indefinite if it cannot be determined whether infringement occurs at the creation of a recited structure or at its use); see also M.P.E.P. § 2173.05(p)(II) (single claim directed to an apparatus and the method steps of its use is indefinite under 35 U.S.C. § 112(b)). It is suggested that Applicant amend the claim to be unambiguously in independent form or unambiguously in dependent form. Compare with claim 12, which, while in an unusual format, is unambiguously dependent on claim 1.

Claim Rejections - 35 U.S.C. § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. § 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1–11 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by W. Song, Y. Yang, M. Fu, F. Qiu, & M. Wang, “Real-Time Obstacles Detection and Status Classification for Collision Warning in a Vehicle Active Safety System”, 19 IEEE Transactions on Intelligent Transportation Systems 758–773 (March 2018) (“Song”). [2]

Song, directed to a vehicle active safety system, teaches with respect to claim 1 a monitoring method for a safe area, performed by a processing module having a processor (p. 770, Intel® Core i5 processor) and a pre-established image identification model (pp. 768–770, KITTI dataset), comprising the following steps: obtaining an input image (Fig. 4, original image), wherein the input image is generated from a driving target in a predefined viewing direction (id., image forward of vehicle), and the input image has an image corresponding to an obstacle (id., detected obstacle regions of interest); generating a safe area to a periphery of an image of the driving target in the input image (Fig. 1, detection area that is not in the dangerous area); generating a predetermined route on the periphery of the image (Fig. 23, obstacles do not present a danger to vehicle turning right; Fig. 17, dangerous areas determined based on speed and steering wheel angle of vehicle), and defining a continuous area that does not belong to the obstacle from a predetermined range extended from the predetermined route to generate a modified area (id., path within flow of traffic; obstacles are considered not dangerous), wherein in a situation that the predetermined range includes a part of an obstacle, a part of a boundary of the modified area corresponds to a part of a contour of the obstacle (Figs. 16, 18; dynamic determining of object status as danger, potential danger, or no danger); and determining whether the safe area is entirely included in the modified area (Fig. 23, obstacles are not in path); wherein in a result of the determination is that the safe area is not entirely included in the modified area, the obstacle has invaded the safe area (Fig. 24, finding of potential danger within path).

Regarding claim 2, Song teaches the monitoring method for a safe area as claimed in claim 1, wherein before determining whether the safe area is entirely included in the modified area, the method further comprises: identifying the obstacle from the input image by the image identification model to generate an obstacle area corresponding to the obstacle (Figs. 22–23, detecting pedestrian or car), and determining whether an intersection [3] exists between the safe area and the obstacle area (id., determining whether vehicle path intersects detected obstacle); in a result that the determination is that the intersection exists, then determining whether the safe area is entirely included in the modified area (Figs. 22–24, classifying danger level).

Regarding claim 3, Song teaches the monitoring method for a safe area as claimed in claim 1, wherein a determining manner is applied to determine whether the safe area is entirely included in the modified area (Fig. 23, obstacles are not in path), and the determining manner is determining whether an area size mutually intersected by the modified area, the safe area, and the obstacle area is equal to an area size intersected by the safe area and the obstacle area (Figs. 16, 18; dynamic determining of object status as danger, potential danger, or no danger; the limitation is logically equivalent to saying that the modified area remains the safe area); in a result [[of]]that the determination is equal, the obstacle has not invaded the safe area (id., continuing to determine there is no dangerous object in vehicle path); and in a result [[of]]that the determination is not equal, the obstacle has invaded the safe area (id., detecting new object).

Regarding claim 4, Song teaches the monitoring method for a safe area as claimed in claim 1, wherein a determining manner is applied to determine whether the safe area is entirely included in the modified area (Fig. 23, obstacles are not in path), and the determining manner is determining whether an area size intersected by the modified area and the safe area is equal to an area size of the safe area (Figs. 16, 18; dynamic determining of object status as danger, potential danger, or no danger; the limitation is logically equivalent to determining whether the modified area and safe area remain equal); in a result [[of]]that the determination is equal, the obstacle has not invaded the safe area (id., continuing to determine there is no dangerous object in vehicle path); in a result [[of]]that the determination is not equal, the obstacle has invaded the safe area (id., detecting new dangerous object).

Regarding claim 5, Song teaches the monitoring method for a safe area as claimed in claim 1, wherein a determining manner is applied to whether the safe area is entirely included in the modified area (Fig. 23, obstacles are not in path), and the determining manner is determining whether an intersection exists between a difference area and the safe area (Figs. 18, 24; potentially dangerous object), and the difference area is defined by an area subtracting the modified area from the input image (Fig. 18, potentially dangerous object is not in the vehicle path); [[in]]if a result of the determination is that the intersection does not exist, the obstacle has not invaded the safe area (Figs. 16, 18; dynamic resolution of potentially dangerous object to no danger); and in a result of the determination is that the intersection exists, the obstacle has invaded the safe area (id., dynamic resolution of potentially dangerous object to danger).

Regarding claim 6, Song teaches the monitoring method for a safe area as claimed in claim 1, wherein in a process of generating the modified area, the continuous area is defined by all pixel points generated from each route pixel point on the predetermined route extending to a boundary pixel point in a first direction (Figs. 16–18, boundary of vehicle pathway); wherein each boundary pixel point is defined by determining whether an extended pixel point extended by each route pixel point on the predetermined route in the first direction is an obstacle pixel point corresponding to the obstacle (Fig. 18, pixels corresponding to projected path and dangerous object with velocity vector); in a situation that all extended pixel points extended by a route pixel point on the predetermined route in the first direction are determined not belonging to the obstacle pixel point, a boundary pixel point is defined by the extended pixel point determined as a pixel point on a boundary of the input image (Fig. 17, pathway box without obstacles); and the continuous area is defined by all of the route pixel points, the extended pixel points, and the boundary pixel points to correspondingly generate the modified area (Fig. 16, dynamic model).

Regarding claim 7, Song teaches the monitoring method for a safe area as claimed in claim 6, wherein in the situation that the extended pixel point extended by the route pixel point on the predetermined route in the first direction is determined as the obstacle pixel point (Fig. 18, object within pathway), the boundary pixel point is defined by another extended pixel point preceding the extended pixel point initially determined as the obstacle pixel point (id., vector of object).

Regarding claim 8, Song teaches the monitoring method for a safe area as claimed in claim 1, wherein a range of the continuous area is limited in the safe area (Fig. 23, safe pathway).

Regarding claim 9, Song teaches the monitoring method for a safe area as claimed in claim 1, wherein an entirety or a part of an image area size of the safe area of the driving target changes with a driving condition (Fig. 17, projected car path changes with speed and steering wheel angle), and the driving condition comprises at least one of a traveling speed, a traveling direction, a current driving scene, and a turning signal (id.).

Regarding claim 10, Song teaches a safety system applied to a driving target (passim, ego vehicle) and comprising: an image capturing unit, arranged on the driving target and configured to obtain an input image of a peripheral environment of the driving target (p. 760, “cameras are also used in our system to detect dynamic objects”); and a processing module, coupled to the image capturing unit (p. 770, machine with processor and memory), wherein the processing module has a processor (id., Intel® Core i5 processor) and a pre-established identification module (pp. 768–770, KITTI dataset) and is configured to receive the input image (p. 770, 15 frames per second) and perform the monitoring method for a safe area according to claim 1 (claim 1 rejection supra).

Regarding claim 11, Song teaches the safety system according to claim 10, further comprising at least one of . . . a steering system . . . coupled to the processing module (Fig. 17, steering wheel angle known).

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 12 and 13 are rejected under 35 U.S.C. § 103 as being unpatentable over Song in view of U.S. Patent Application Publication No. 2014/0118550 A1 (“Yoon”).

Claims 12 and 13 are directed to displaying information derived from the claim 1 method. Song does not specify any particular display, only image analysis. However, Song in view of Yoon teaches a displaying method comprising: presenting an information in the monitoring method for a safe area according to claim 1 (claim 1 rejection supra) on a display (Yoon ¶ 0011, displaying around view monitoring area through a vehicle display); wherein[[,]] the information includes the input image (¶ 0047, display peripheral area surrounding vehicle), and further includes at least one of . . . an obstacle area of the obstacle (id., displaying obstacle area). It would have been obvious to one of ordinary skill in the art at the time of effective filing to display the dangerous obstacles determined by Song on a vehicle display, as in Yoon, so as to warn the driver.

Regarding claim 13, Song in view of Yoon teaches the displaying method according to claim 12, wherein in a result of determination is that the safe area is not entirely included in the modified area (claim 1 rejection supra), an invading area, intersected by the difference area and the safe area, is continuously displayed or flashes with a preset color (Yoon ¶ 0047, display obstacle area using a color, contrast, or pattern).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. The following prior art was found using an Artificial Intelligence assisted search with an internal AI tool that uses the classification of the application under the Cooperative Patent Classification (CPC) system, as well as the specification, including the claims and abstract, of the application as contextual information. Where possible, English-language equivalents are given, and redundant results within the same patent families are eliminated. See “New Artificial Intelligence Functionality in PE2E Search”, 1504 OG 359 (15 November 2022); “Automated Search Pilot Program”, 90 F.R. 48,161 (8 October 2025).

WO 2022/078463 A1
CN 112784671 A
US 2020/0216091 A1
EP 3651138 A1
US 2020/0391745 A1
US 2015/0063647 A
JP 2006-350699 A

This Office action has an attached requirement for information under 37 C.F.R. § 1.105. A complete reply to this Office action must include a complete reply to the attached requirement for information. The time period for reply to the attached requirement coincides with the time period for reply to this Office action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to David N Werner, whose telephone number is (571) 272-9662. The examiner can normally be reached M–F 7:30–4:00 Central. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Dave Czekaj, can be reached at 571.272.7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/David N Werner/
Primary Examiner, Art Unit 2487

[1] The structures disclosed in the “image capturing unit” and “processing module” limitations place claim 10 outside the scope of 35 U.S.C. § 112(f).

[2] This reference was cited as an ‘X’ reference in the European Search Report for corresponding European Application No. EP 4648025 and was listed in the 27 August 2025 Information Disclosure Statement.

[3] The specification defines an “intersection” as an intersection of areas, not a road intersection. Specification ¶¶ 0057, 0059–0061. See M.P.E.P. §§ 2111.01(IV), 2173.05(a)(III) (applicant as own lexicographer).
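The three "determining manners" the Office action walks through for claims 3–5 are set relations on image regions, which is why the examiner can treat them as restatements of claim 1's inclusion test. A minimal sketch, modeling each area as a set of (x, y) pixel coordinates (illustrative only; the function names and the toy 4x4 data are ours, not from the application or the prior art):

```python
# Illustrative only: model each claimed "area" as a set of (x, y) pixel
# coordinates and encode the three claimed tests for whether "the safe
# area is entirely included in the modified area".

def invaded_claim3(modified, safe, obstacle):
    # Claim 3: no invasion iff |modified ∩ safe ∩ obstacle| == |safe ∩ obstacle|.
    return len(modified & safe & obstacle) != len(safe & obstacle)

def invaded_claim4(modified, safe):
    # Claim 4: no invasion iff |modified ∩ safe| == |safe|, i.e. safe ⊆ modified.
    return len(modified & safe) != len(safe)

def invaded_claim5(image, modified, safe):
    # Claim 5: difference area = input image minus modified area;
    # invasion iff that difference intersects the safe area.
    return bool((image - modified) & safe)

# Toy 4x4 input image.
image = {(x, y) for x in range(4) for y in range(4)}
safe = {(1, 1), (1, 2)}
obstacle = {(1, 2), (3, 3)}

clear = image - {(3, 3)}    # modified area: obstacle contour outside the safe area
blocked = clear - {(1, 2)}  # modified area: obstacle contour cuts into the safe area

assert not invaded_claim4(clear, safe) and invaded_claim4(blocked, safe)
assert not invaded_claim5(image, clear, safe) and invaded_claim5(image, blocked, safe)
assert not invaded_claim3(clear, safe, obstacle) and invaded_claim3(blocked, safe, obstacle)
```

Given safe ⊆ image, claims 4 and 5 are exact complements of the subset test safe ⊆ modified; claim 3 applies the same test restricted to obstacle pixels.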

Prosecution Timeline

Jun 25, 2024
Application Filed
Mar 19, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598312
OVERHEAD REDUCTION IN MEDIA STORAGE AND TRANSMISSION
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12598297
METHOD AND APPARATUS FOR RECONSTRUCTING 360-DEGREE IMAGE ACCORDING TO PROJECTION FORMAT
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12593144
SOLID STATE IMAGING ELEMENT, IMAGING DEVICE, AND SOLID STATE IMAGING ELEMENT CONTROL METHOD
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12587754
METHOD FOR DYNAMIC CORRECTION FOR PIXELS OF THERMAL IMAGE ARRAY
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12587689
METHOD AND APPARATUS FOR RECONSTRUCTING 360-DEGREE IMAGE ACCORDING TO PROJECTION FORMAT
Granted Mar 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 68%
With Interview: 84% (+16.2%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 713 resolved cases by this examiner. Grant probability derived from career allow rate.
