Prosecution Insights
Last updated: April 19, 2026
Application No. 18/685,984

SIGNAL PROCESSING DEVICE, SIGNAL PROCESSING METHOD, AND PROGRAM

Non-Final OA: §101, §102, §103

Filed: Feb 23, 2024
Examiner: PARK, EDWARD
Art Unit: 2675
Tech Center: 2600 — Communications
Assignee: Sony Semiconductor Solutions Corporation
OA Round: 1 (Non-Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% — above average (576 granted / 704 resolved; +19.8% vs TC avg)
Interview Lift: +18.4% higher allow rate among resolved cases with an interview
Typical Timeline: 2y 9m average prosecution; 27 applications currently pending
Career History: 731 total applications across all art units
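The interview lift above is simply the allow-rate gap between resolved cases with and without an examiner interview. A minimal sketch of the arithmetic: the 576/704 career figures come from this page, but the with/without split below is hypothetical, since only the aggregate +18.4% lift is reported.

```python
# Career allow rate and interview lift for this examiner.
# The 576 granted / 704 resolved figures come from the page above; the
# with/without interview split is HYPOTHETICAL -- only the aggregate
# lift is reported, not the underlying counts.

def allow_rate(granted: int, resolved: int) -> float:
    """Allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allow_rate(576, 704)             # ~81.8%, displayed as 82%

# Hypothetical split of the 704 resolved / 576 granted cases:
with_interview = allow_rate(95, 100)      # interviewed cases
without_interview = allow_rate(481, 604)  # everything else
lift = with_interview - without_interview # positive => interviews help
```

With this split the lift comes out near +15 points; the real split behind the reported +18.4% is not shown on the page.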

Statute-Specific Performance

§101: 16.9% (-23.1% vs TC avg)
§102: 21.3% (-18.7% vs TC avg)
§103: 47.3% (+7.3% vs TC avg)
§112: 6.3% (-33.7% vs TC avg)

Baseline = Tech Center average estimate • Based on career data from 704 resolved cases
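Each "vs TC avg" delta above is a plain difference between the examiner's per-statute rate and the Tech Center average estimate, so the baseline can be backed out from the displayed figures. A quick sketch of that check:

```python
# Back-computing the Tech Center baseline from the displayed figures.
# Each "vs TC avg" delta is (examiner rate - TC average), so the
# baseline is (examiner rate - delta). Values copied from the panel.

examiner_rate = {"101": 16.9, "102": 21.3, "103": 47.3, "112": 6.3}
delta_vs_tc = {"101": -23.1, "102": -18.7, "103": 7.3, "112": -33.7}

tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
          for s in examiner_rate}
# Every statute backs out to the same 40.0% Tech Center estimate.
```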

Office Action

Grounds of rejection: §101, §102, §103
DETAILED ACTION

Contents: Notice of Pre-AIA or AIA Status • Claim Interpretation • Claim Rejections - 35 USC § 101 • Claim Rejections - 35 USC § 102 • Claim Rejections - 35 USC § 103 • Conclusion

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to applicant’s claim set received on 2/23/24. Claims 1-16 are currently pending.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “an image signal analysis unit configured to input a captured image captured by a monocular camera mounted on a vehicle, and determine whether an object in the captured image is a real object or a drawing object by image analysis of the input captured image” in claims 1-14.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid such interpretation (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid such interpretation.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 15, and 16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter as follows. Claims 1, 15, and 16 encompass limitations that deem the claims to be mental processes, and the claims do not integrate the abstract idea into a practical application. Furthermore, the claims do not recite significantly more or an inventive concept. Thus, the cited claims are considered to be non-statutory subject matter.

Claim 16 is further rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter as follows. In particular, the claim recites a program, which is pure software and not deemed statutory subject matter. The applicant is advised to recite a non-transitory computer-readable medium that applies the program, rather than the program itself.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 15, and 16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Jia et al. (VC: “Real-time obstacle detection with motion features using monocular vision”).

Regarding claim 1, Jia discloses a signal processing device, comprising an image signal analysis unit configured to input a captured image captured by a monocular camera mounted on a vehicle (see abstract, pp. 283, 288: “Hence, if we focus on a point in the real 3D world and know its projective points (actually, only the ordinate values) in two consecutive images, the hc can be calculated. Theoretically, we can obtain hcs of all points in two consecutive images … monocular vision … processor”), and determine whether an object in the captured image is a real object or a drawing object by image analysis of the input captured image (see abstract: “can effectively tell apart obstacles from shadows and road surface markings … We propose the followings: (1) a two consecutive frames (TCF) model to find the differences between obstacles and the ground plane by motion features; (2) a filter to increase probabilities of obstacle regions; (3) an updating process to reduce false positives and update the algorithm when the vehicle moves on. We perform experiments on two datasets and our autonomous vehicle. The results show that our method is effective in various conditions and meets the real-time requirement”).

Regarding claim 15, Jia discloses a signal processing method executed in a signal processing device, the method including executing image signal analysis processing by an image signal analysis unit of inputting a captured image captured by a monocular camera mounted on a vehicle, and determining whether an object in the captured image is a real object or a drawing object by image analysis of the input captured image (see the same passages cited for claim 1).

Regarding claim 16, Jia discloses a program (see section 5.5, abstract; algorithm) causing a signal processing device to execute signal processing, the program causing an image signal analysis unit to perform image signal analysis processing of inputting a captured image captured by a monocular camera mounted on a vehicle, and determining whether an object in the captured image is a real object or a drawing object by image analysis of the input captured image (see the same passages cited for claim 1).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Jia et al. (VC: “Real-time obstacle detection with motion features using monocular vision”) in view of Tseng (US 6,765,480 B2).

Regarding claim 12, Jia teaches all elements as mentioned above in claim 1. Jia does not expressly teach an image analysis unit configured to determine whether or not the vehicle is traveling in an optical axis direction of the monocular camera, with the image signal analysis unit executing, in a case where the image analysis unit determines that the vehicle is traveling in the optical axis direction of the monocular camera, processing of determining whether the object in the captured image is a real object or a drawing object. Tseng, in the same field of endeavor, teaches these limitations (see col. 3, lines 30-67; col. 4, lines 30-67; col. 8, lines 1-50). It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify Jia to utilize the cited limitations as suggested by Tseng. The suggestion/motivation for doing so would have been to enhance driving safety by aiding road vehicle driving (see col. 1, lines 50-60). Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface, and/or programming techniques, without changing a “fundamental” operating principle of Jia, while the teaching of Tseng continues to perform the same function as originally taught prior to being combined, in order to produce a repeatable and predictable result. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.

Claims 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Jia et al. (VC: “Real-time obstacle detection with motion features using monocular vision”) in view of Ferguson et al. (US 10,317,906 B2).

Regarding claims 13-14, Jia teaches all elements as mentioned above in claim 1. Jia does not expressly teach: outputting a determination result of whether the object in the captured image is a real object or a drawing object to a vehicle control unit, with the vehicle control unit outputting a warning in a case where the determination result that the object in the captured image is a drawing object is input from the image signal analysis unit; and starting, in a case where the determination result that the object in the captured image is a drawing object is input from the image signal analysis unit and the vehicle is on automated driving operation, processing of shifting the vehicle from automated driving to manual driving. Ferguson, in the same field of endeavor, teaches these limitations (see col. 1, lines 17-67; col. 2, lines 1-30; col. 3, lines 30-67; col. 9, lines 1-67; col. 10, lines 1-30). It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify Jia to utilize the cited limitations as suggested by Ferguson. The suggestion/motivation for doing so would have been to enhance safety and efficiency of autonomous vehicles (see col. 3, lines 50-67). Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in the manner explained above using known engineering design, interface, and/or programming techniques, without changing a “fundamental” operating principle of Jia, while the teaching of Ferguson continues to perform the same function as originally taught prior to being combined, in order to produce a repeatable and predictable result. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.

Allowable Subject Matter

Claims 2-11 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Regarding claims 2-4, none of the references of record alone or in combination suggest or fairly teach wherein the image signal analysis unit includes a focus of expansion (FOE) position detector configured to detect a FOE position from the captured image, and a drawing object determination unit configured to determine whether the object in the captured image is a real object or a drawing object on a basis of a temporal change of the FOE position detected by the FOE position detector, and the drawing object determination unit determines, in a case where a change amount per unit time of the FOE position is equal to or larger than a predetermined threshold value, that the object in the captured image is a drawing object.

Regarding claims 5-6, none of the references of record alone or in combination suggest or fairly teach wherein the image signal analysis unit includes a lane detector configured to detect, from the captured image, a traveling lane on which the vehicle travels, and a drawing object determination unit configured to determine whether an object in the captured image is a real object or a drawing object on a basis of a temporal change of a lane width of the traveling lane detected by the lane detector, and the drawing object determination unit determines, in a case where a change amount per unit time of the lane width is equal to or larger than a predetermined threshold value, that the object in the captured image is a drawing object.

Regarding claims 7-10, none of the references of record alone or in combination suggest or fairly teach wherein the image signal analysis unit includes a ground level detector configured to detect a ground level from the captured image, and a drawing object determination unit configured to determine whether the object in the captured image is a real object or a drawing object on a basis of a difference between a grounding position of a grounding object in the captured image and a ground level position corresponding to the grounding position of the grounding object, and the drawing object determination unit determines, in a case where that difference is equal to or larger than a predetermined threshold value, that the object in the captured image is a drawing object.

Regarding claim 11, none of the references of record alone or in combination suggest or fairly teach wherein the image signal analysis unit determines that the object in the captured image is a real object in a case where a change amount per unit time of a FOE position detected from the captured image is smaller than a predetermined threshold value, a change amount per unit time of a lane width of a traveling lane on which the vehicle travels detected from the captured image is smaller than a predetermined threshold value, and a difference between a ground level detected from the captured image and a grounding position of a grounding object in the captured image is smaller than a predetermined threshold value.

Conclusion

Claims 1 and 12-16 are rejected. Claims 2-11 are objected to as being dependent upon a rejected base claim.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWARD PARK. The examiner’s contact information is as follows: Telephone: (571) 270-1576 | Fax: (571) 270-2576 | Edward.Park@uspto.gov. For email communications, please notate MPEP 502.03, which outlines procedures pertaining to communications via the internet and authorization. A sample authorization form is cited within MPEP 502.03, section II.

The examiner can normally be reached M-F, 9-6 CST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Moyer, can be reached at (571) 272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EDWARD PARK/
Primary Examiner, Art Unit 2666
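The allowable dependent claims (2-11) recite three concrete image-analysis tests for telling a real object from a drawing object: FOE-position drift over time, lane-width change over time, and the gap between an object's grounding position and the expected ground level, each compared against a threshold, with claim 11 treating the object as real when all three signals stay below threshold. A minimal sketch of that combined decision logic, with hypothetical thresholds and units (nothing below comes from the specification):

```python
# Hedged sketch of the determination logic recited in claims 2-11: an
# object is flagged as a "drawing object" (e.g., a picture painted on a
# surface) when any per-frame signal changes faster than its threshold;
# otherwise (claim 11) it is treated as a real object. All thresholds
# and units are HYPOTHETICAL illustrations.

from dataclasses import dataclass

@dataclass
class FrameFeatures:
    foe_shift: float          # FOE position change per unit time (px/s)
    lane_width_change: float  # traveling-lane width change per unit time (px/s)
    ground_gap: float         # |grounding position - expected ground level| (px)

FOE_THRESH = 5.0     # claims 2-4: FOE drift threshold
LANE_THRESH = 3.0    # claims 5-6: lane-width change threshold
GROUND_THRESH = 10.0 # claims 7-10: grounding-position gap threshold

def is_drawing_object(f: FrameFeatures) -> bool:
    """True if any claimed test indicates a drawing object."""
    return (f.foe_shift >= FOE_THRESH
            or f.lane_width_change >= LANE_THRESH
            or f.ground_gap >= GROUND_THRESH)
```

For example, a large FOE drift alone flags a drawing object, while an object with all three signals below threshold is treated as real, mirroring claim 11.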

Prosecution Timeline

Feb 23, 2024
Application Filed
Mar 24, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this examiner in similar technology areas

Patent 12602911
SYSTEMS AND METHODS FOR HANDWRITING RECOGNITION USING OPTICAL CHARACTER RECOGNITION
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12602815
WEAKLY PAIRED IMAGE STYLE TRANSFER METHOD BASED ON POSE SELF-SUPERVISED GENERATIVE ADVERSARIAL NETWORK
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12597173
AUTOMATIC GENERATION OF AN IMAGE HAVING AN ATTRIBUTE FROM A SUBJECT IMAGE
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12594023
METHOD AND DEVICE FOR PROVIDING ALOPECIA INFORMATION
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12592000
SYSTEMS AND METHODS FOR PROCESSING DIGITAL IMAGES TO ADAPT TO COLOR VISION DEFICIENCY
Granted Mar 31, 2026 (2y 5m to grant)
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
99%
With Interview (+18.4%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 704 resolved cases by this examiner. Grant probability derived from career allow rate.
