Prosecution Insights
Last updated: April 19, 2026
Application No. 18/278,940

METHOD AND APPARATUS FOR ROAD INSPECTION

Non-Final OA: §102, §112
Filed
Aug 25, 2023
Examiner
MARINI, MATTHEW G
Art Unit
2853
Tech Center
2800 — Semiconductors & Electrical Systems
Assignee
Telefonaktiebolaget LM Ericsson (publ)
OA Round
1 (Non-Final)
60%
Grant Probability
Moderate
1-2
OA Rounds
3y 6m
To Grant
82%
With Interview

Examiner Intelligence

Grants 60% of resolved cases
60%
Career Allow Rate
641 granted / 1060 resolved
-7.5% vs TC avg
Strong +21% interview lift
+21.2%
Interview Lift
resolved cases with interview
Typical timeline
3y 6m
Avg Prosecution
68 currently pending
Career history
1128
Total Applications
across all art units
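The headline numbers on this card follow from simple arithmetic on the raw counts. A minimal sketch, using only the figures shown above (the variable names are illustrative, not from any API):

```python
# Reproduce the examiner stat card from its raw counts.
# 641 granted of 1060 resolved and a +21.2 pp interview lift are the
# figures shown on the card; everything else is illustrative scaffolding.

granted, resolved = 641, 1060

career_allow_rate = granted / resolved               # 0.6047 -> shown as "60%"
interview_lift = 0.212                               # +21.2 percentage points
with_interview = career_allow_rate + interview_lift  # 0.8167 -> shown as "82%"

print(f"career allow rate: {career_allow_rate:.0%}")  # career allow rate: 60%
print(f"with interview: {with_interview:.0%}")        # with interview: 82%
```

This is also where the "82% With Interview" projection comes from: the career allow rate plus the interview lift, rounded to the nearest percent.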

Statute-Specific Performance

§101
13.1%
-26.9% vs TC avg
§103
45.2%
+5.2% vs TC avg
§102
28.0%
-12.0% vs TC avg
§112
11.3%
-28.7% vs TC avg
Black line = Tech Center average estimate • Based on career data from 1060 resolved cases
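The per-statute deltas can be cross-checked against the rates. A quick sketch, assuming the "vs TC avg" figures are plain percentage-point differences (the dictionary layout below is made up for the sketch):

```python
# Recover the implied Tech Center average for each statute from the
# examiner's rate and the displayed delta (examiner minus TC avg, in pp).
stats = {                # statute: (examiner rate %, delta vs TC avg in pp)
    "101": (13.1, -26.9),
    "103": (45.2, +5.2),
    "102": (28.0, -12.0),
    "112": (11.3, -28.7),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"§{statute}: examiner {rate}% vs implied TC avg {tc_avg:.1f}%")
# All four statutes imply the same ~40.0% baseline, so the displayed
# deltas are internally consistent with a single Tech Center estimate.
```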

Office Action

§102 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-15 and 17-21 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites “a first video clip from the at least part of the video stream is compared with a second video clip to determine a vibration reflecting a quality of a road section of the road”. It is unclear to the examiner how the two video clips are compared in a manner that reflects vibration of a road. What is being compared in the clips, and how can comparing two images determine a vibration reflecting road quality? Clarification is required. Note: Claims 14 and 17 are rejected similarly.

Claims 4 and 20 recite that the second video clip is associated with a type of the vehicle related to the user equipment. The type of association is unclear. How is a video clip considered to be associated with a type of vehicle? What are the structural limitations imposed by “associated”? Clarification is required.
Claims 5 and 21 recite “the vibration includes one or more offsets of the first video clip to the second video clip”; however, the examiner is unsure how the vibration includes offsets when the vibration is determined from a video clip. Offset from what? Clarification is required.

Claim 6 recites “the vibration matches a vibration pattern, the vibration indicates one or more of: a potential damage of the road section; a level of the potential damage; a road facility set on or around the road section being malfunctioned and/or moved; a need for adjusting quality inspection of the road section; and an abnormal status of the user equipment”. The claim discloses some very generic issues that could explain observed vibration patterns. How and where are these patterns collected, determined, or used? What is doing the matching? If the clips are collected and used to determine vibrations, which is not clear (see the above 112(b) rejection of claim 1), how are vibration patterns determined and then matched? Clarification is required.

Claim 7 is unclear. Is a vibration pattern defined by the fact that one or more offset(s) is/are within a predefined range? This definition does not seem compatible with the standard meaning of the term vibration, i.e., a repetitive behavior. The term must be further clarified.

Claim 8 discloses that “the first video clip is determined by the user equipment and/or server”. What does “determined” mean exactly in this context? Clarification is required.

Claim 10 discloses that “the comparison between the first video clip and the second video clip is based at least in part on one or more of: a relative position of the user equipment to a road facility; an absolute position of the user equipment; and a trajectory of the user equipment”. The examiner is unsure how a comparison of video clips, i.e. images, can be based on a relative position to a road facility, an absolute position, or a trajectory. Clarification is required.
With respect to claim 12, the claim recites “the server is implemented at”. How is a server implemented? What does that mean structurally? Clarification is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-15 and 17-21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Yonekawa et al. (2018/0195973).

With respect to claim 1, Yonekawa et al. teaches a method performed by a user equipment (i.e. an onboard camera 21 of vehicle 20 having computer elements 1 and 2; [0035]), comprising: capturing a video stream of a road (as Yonekawa et al. teaches the camera 21 captures moving images during a driving operation of a vehicle; [0035] [0040]); and transmitting at least part of the video stream to a server (113; as Yonekawa et al. teaches at least image data captured from the video stream is sent to a web server; [0059] [0310]), wherein a first video clip (i.e. an image from the video) from the captured video (via 21) from the at least part of the video stream is compared with a second video clip (i.e. a previously collected image) to determine a vibration reflecting a quality of a road section of the road (as Yonekawa teaches calculating a crack ratio using a first image clip and a second image clip, where the second image clip was a previously collected image, to determine crack growth; based on the calculated ratio, the taught method determines a quality of the road section based on whether the crack is growing; a larger crack would reflect poor road quality, resulting in increased vibrations when driving thereon; [0167-0171]; therefore, insofar as how “to determine a vibration reflecting a quality of a road section of the road” is structurally defined, the claimed invention is taught by Yonekawa et al.).

With respect to claims 2 and 18, Yonekawa et al. teaches the method wherein the second video clip (i.e. the previously collected image) is a reference video clip used as a quality baseline of the road section (the previously collected image serves as a historical baseline for calculating the crack ratio, thereby providing a quality baseline for monitoring changes in road quality).

With respect to claims 3 and 19, Yonekawa et al. teaches the method wherein the second video clip (i.e. the previously collected image) corresponds to the road section (i.e. the portion of the road having the crack) with a quality equal to a predefined level (as the second image defines the quality of the road in the past, thereby defining a predefined level from a previous time, as Yonekawa et al. teaches in [0170] using that past image to define a previous road quality to determine if the quality of the road at that present moment is better or worse).

With respect to claims 4 and 20, Yonekawa et al. teaches the method wherein the user equipment is a vehicle (20 having camera 21), and the second video clip (i.e. past image) is associated with a type of the vehicle (as the image is collected from the same vehicle, thereby being the same type; as best understood in light of the above 112(b) rejection).

With respect to claims 5 and 21, Yonekawa et al. teaches the method wherein the vibration includes one or more offsets of the first video clip relative to the second video clip (as the calculated ratio defines an offset in certain technical and image processing contexts; therefore, the calculated ratio using past and present images provides an indication of vibrations when traveling over the crack, thereby reading on the claimed invention; as best understood).

With respect to claim 6, Yonekawa et al. teaches the method wherein when the vibration matches a vibration pattern, the vibration indicates a potential damage of the road section (as the calculated ratio indicates a growing crack, which therefore would create a vibration indicative of the road quality due to the crack, thereby matching a vibration pattern indicating a growing crack; insofar as how a vibration pattern is structurally determined, sensed or collected; see the above 112(b) rejection).

With respect to claim 7, Yonekawa et al. teaches the method wherein the vibration pattern includes one video offset within a predefined range (as the calculated ratio is calculated from images collected in the present and the past; therefore the ratio, i.e. the calculated video offset, will have a predefined range defined by the images; as best understood by the examiner in light of the above 112(b) rejection).

With respect to claim 8, Yonekawa et al. teaches the method wherein the first video clip is determined by the user equipment (as best understood, the camera 21 determines the first video clip by collecting those images).

With respect to claim 9, Yonekawa et al. teaches the method wherein the first video clip (i.e. the image collected by the camera 21) is determined according to the user equipment (21) entering and/or leaving the road section (as the camera determines the clip when the vehicle is entering the road section containing the crack; as best understood in light of the 112(b) rejection).

With respect to claim 10, Yonekawa et al. teaches the method wherein the comparison between the first video clip and the second video clip (as the examiner considers the calculation of the ratio as the comparison) is based at least in part on a trajectory of the user equipment (i.e. direction of the camera 21; [0035]).

With respect to claim 11, Yonekawa et al. teaches the method further comprising: receiving a notification from the server (113), wherein the notification (as a display 112 displays a notification) indicates a potential damage of the road section (as Yonekawa et al. teaches displaying the potential damage of the road section by superimposing the determined cracks; [0086]).

With respect to claim 12, Yonekawa et al. teaches all that is claimed, further defining the alternative selected by the examiner in claim 11. Therefore, the limitations of claim 12 do not further limit the examiner-elected alternative seen in claim 11 over the prior art.

With respect to claim 13, Yonekawa et al. teaches the method wherein the server (113) is implemented at the user equipment (20/21).

With respect to claim 14, Yonekawa et al. teaches a user equipment (1/2/20/21), comprising: one or more processors (as indirectly taught, as the disclosed invention in Yonekawa operates in a computer environment); and one or more memories [0184] comprising computer program codes [0314], the one or more memories [0184] and the computer program codes [0314] configured to, with the one or more processors (as indirectly taught, as the disclosed invention in Yonekawa operates in a computer environment), cause the user equipment (1/2/20/21) at least to capture a video stream of a road (as Yonekawa et al. teaches the camera 21 captures moving images during a driving operation of a vehicle; [0035] [0040]); and transmit at least part of the video stream to a server (113; as Yonekawa et al. teaches at least image data related to a pavement crack analysis is sent to a web server; [0059] [0310]), wherein a first video clip (i.e. an image) from the captured video (via 21) from the at least part of the video stream is compared with a second video clip (i.e. a previously collected image) to determine a vibration reflecting a quality of a road section of the road (as Yonekawa teaches calculating a crack ratio using a first image clip and a second image clip, where the second image clip was a previously collected image, to determine crack growth; based on the calculated ratio, the method determines a quality of the road section based on whether the crack is growing; a larger crack would reflect poor road quality, resulting in increased vibrations when driving thereon; [0167-0171]; therefore, insofar as how “to determine a vibration reflecting a quality of a road section of the road” is structurally defined, the claimed invention is taught by Yonekawa et al.).

With respect to claim 15, Yonekawa et al. teaches the user equipment (1/2/20/21) according to claim 14, wherein the one or more memories [0184] and the computer program codes [0314] are configured to, with the one or more processors (as indirectly taught), cause the user equipment (1/2/20/21) to perform further operations comprising: receive a notification from the server (113), wherein the notification (as a display 112 displays a notification) indicates a potential damage of the road section (as Yonekawa et al. teaches displaying the potential damage of the road section by superimposing the determined cracks; [0086]).

With respect to claim 17, Yonekawa et al. teaches a method performed by a server (i.e. 1/2, as these elements are disclosed as being a variety of computing elements, including a server; [0182]), comprising: receiving at least part of a video stream captured for a road from a user equipment (as Yonekawa et al. teaches the camera 21 captures moving images during a driving operation of a vehicle; [0035] [0040]); and determining a vibration reflecting a quality of a road section of the road, by comparing a first video clip (i.e. an image) from the at least part of the video stream with a second video clip (as Yonekawa teaches calculating a crack ratio using a first image clip and a second image clip, where the second image clip was a previously collected image, to determine crack growth; based on the calculated ratio, the method determines a quality of the road section based on whether the crack is growing; a larger crack would reflect poor road quality, resulting in increased vibrations when driving thereon; [0167-0171]; therefore, insofar as how “to determine a vibration reflecting a quality of a road section of the road” is structurally defined, the claimed invention is taught by Yonekawa et al.).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Shimomura et al. (2013/0169794) teaches image data related to road quality.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW G MARINI whose telephone number is (571)272-2676. The examiner can normally be reached Monday-Friday, 8am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen Meier, can be reached at 571-272-2149. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MATTHEW G MARINI/
Primary Examiner, Art Unit 2853
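For readers puzzling over the same question as the examiner ("Offset from what?"), here is one plausible, purely illustrative reading of a per-frame offset between two clips: a vertical pixel shift between corresponding frames, recoverable by brute-force cross-correlation of row-brightness profiles. Nothing below is from the application or the cited art; all names and data are invented for the sketch.

```python
import random

# Illustrative only: treat each frame as a list of row-brightness values and
# estimate the vertical shift of one frame relative to another by maximizing
# a brute-force cross-correlation score. A sequence of such per-frame shifts
# is one way a "vibration" could be expressed as offsets between two clips.

def vertical_offset(rows_a, rows_b, max_shift=10):
    """Shift s (|s| <= max_shift) that best aligns rows_a[i] with rows_b[i - s]."""
    def score(s):
        return sum(rows_a[i] * rows_b[i - s]
                   for i in range(len(rows_a)) if 0 <= i - s < len(rows_b))
    return max(range(-max_shift, max_shift + 1), key=score)

# Synthetic check: a reference profile and the same profile jolted down 3 rows.
random.seed(0)
reference = [random.randint(-5, 5) for _ in range(40)]
jolted = [0, 0, 0] + reference[:-3]
print(vertical_offset(jolted, reference))  # -> 3: the jolt is recovered
```

Whether anything like this matches the applicant's intended meaning is exactly what the §112(b) rejection asks the applicant to clarify.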

Prosecution Timeline

Aug 25, 2023
Application Filed
Jan 16, 2026
Non-Final Rejection — §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599201
Printable Hook and Loop Structure
2y 5m to grant • Granted Apr 14, 2026
Patent 12600007
POLISHING APPARATUS AND POLISHING METHOD
2y 5m to grant • Granted Apr 14, 2026
Patent 12590863
VIBRATION ANALYSIS SYSTEM AND VIBRATION ANALYSIS METHOD
2y 5m to grant • Granted Mar 31, 2026
Patent 12591078
INFORMATION PROCESSING APPARATUS, RADAR APPARATUS, METHOD, AND STORAGE MEDIUM
2y 5m to grant • Granted Mar 31, 2026
Patent 12590987
GENERATING A VIRTUAL SENSOR SIGNAL FROM A PLURALITY OF REAL SENSOR SIGNALS
2y 5m to grant • Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
60%
Grant Probability
82%
With Interview (+21.2%)
3y 6m
Median Time to Grant
Low
PTA Risk
Based on 1060 resolved cases by this examiner. Grant probability derived from career allow rate.
