DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1 – 20 are pending in this application.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 02/28/2024 was filed in compliance with the provisions of 37 CFR 1.97 and 1.98. Accordingly, the information disclosure statement is being considered by the examiner.
Applicant has provided an explanation of relevance of cited document(s) JP-2019-057192A on pages 1-2 of the specification.
Applicant has not provided an explanation of relevance of the cited documents discussed below.
Viswanathan (U.S. Pre-Grant Publication No. 2020/0257921 A1) provides an apparatus, method, and computer program product for predicting feature space decay using variational auto-encoder networks. Methods may include: receiving a first image of a road segment including a feature disposed along the road segment; applying a loss function to the feature of the first image; generating a revised image, where the revised image includes a weathered iteration of the feature; generating a predicted image using interpolation between the image and the revised image of a partially weathered iteration of the feature; receiving a user image, where the user image is received from a vehicle traveling along the road segment; correlating a feature in the user image to the partially weathered iteration of the feature in the predicted image; and establishing that the feature in the user image is the feature disposed along the road segment.
Toyota (JP 2020-013537 A) provides a road surface condition estimation device (1) comprising: acquiring means (111) for acquiring behavior information on a behavior of a vehicle (2) from the vehicle; determining means (112) for determining, based on the behavior information, whether or not an abnormal condition is satisfied, which is determined based on a specific behavior assumed to be taken by the vehicle when the vehicle encounters a road surface abnormality; and estimating means (112) for estimating the condition of the road surface on the basis of the determination result of the determining means.
Murata (JP 2019-185443 A) provides a road management system 10 that includes: a communication device 13 that receives from a vehicle 40 measurement information 31 on a road surface property, which is measured by the vehicle 40, in association with position information on the vehicle 40 obtained when the road surface property is measured; an analysis module 21 that uses the measurement information 31 on the road surface property within a certain road section over a certain measurement period to analyze a regression line which approximately fits a time-elapsing change in the road surface property within the road section over the measurement period; and a prediction module 22 that predicts a near-future regression line for the road section, which is supposed to be obtained after elapse of the measurement period, on the basis of a transition in traffic volume within the road section over the measurement period.
Toshiba et al. (JP 2020-147961 A) provide a road maintenance management system comprising: a pavement type determination device that automatically determines a pavement type of a road from an imaged road image and imaging position information; a pavement deterioration determination device that automatically determines a degree of deterioration for each pavement type based on the road images, the imaging position information, and the pavement type determination results; and a repair priority determination device that determines a priority of repairs based on the road images, the imaging position information, the pavement type determination results, and the pavement deterioration determination results.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1 – 20 are directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claims 1, 19 and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim recites “an image generation system configured to store instructions; and at least one processor configured to execute the instructions to: acquire a road image obtained by imaging of a road; determine a future degradation level of the road; and generate, based on the road image, a predictive image representing road degradation on a surface of the road, in accordance with the future degradation level”.
The claim limitation of “acquire a road image obtained by imaging of a road”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in human activity or in the mind but for the recitation of a generic computer component. That is, other than reciting “processor”, nothing in the claim element precludes the step from practically being performed in human activity or in the mind. For example, but for the “processor” language, “acquire” in the context of this claim encompasses a user obtaining a captured image of a road.
Similarly, the limitation of “determine a future degradation level of the road”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in human activity or in the mind but for the recitation of a generic computer component. That is, other than reciting “processor”, nothing in the claim element precludes the step from practically being performed in human activity or in the mind. For example, but for the “processor” language, “determine” in the context of this claim encompasses a user anticipating whether the road in the captured image will develop a defect.
Similarly, the limitation of “generate, based on the road image, a predictive image representing road degradation on a surface of the road, in accordance with the future degradation level”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in human activity or in the mind but for the recitation of a generic computer component. That is, other than reciting “processor”, nothing in the claim element precludes the step from practically being performed in human activity or in the mind. For example, but for the “processor” language, “generate” in the context of this claim encompasses a user drawing on the captured image a score/level of the anticipated defect. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in human activity but for the recitation of generic computer components, then it falls within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claim only recites one additional element – using a processor to perform the acquiring, determining, and generating steps.
The processor in each step is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor to perform the acquiring, determining, and generating steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 – 10, 13, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ryu (English Machine Translation of KR 102167291 B1, Published October 20, 2020) in view of Twumasi-Boakye et al. (U.S. Pre-Grant Publication No. 2022/0363287 A1, hereinafter ‘Twumasi’).
With respect to claim 1, Ryu teaches an image generation system (e.g., a road condition information collection system, page 2 of 12) comprising: at least one memory (e.g., a device memory 150, page 4 of 12) configured to store instructions (e.g., storing program(s), page 4 of 12); and at least one processor (e.g., a control unit 170, page 4 of 12) configured to execute the instructions to: acquire a road image obtained by imaging of a road (e.g., collecting at least a front road surface image from a camera, abstract, page 3 of 12); determine a condition of the road (e.g., predict a condition of the road, pages 3 – 4 and 6 – 9 of 12); and
generate, based on the road image, a predictive image representing road degradation on a surface of the road, in accordance with the condition (e.g., generate, based on the front road surface image, a dynamic information or a road condition prediction model representing the condition of the road on a surface of the road, in accordance with the condition of the road and image analysis, pages 3 – 4 and 6 - 9 of 12, Figs. 4, 6 & 9); but fails to teach that said condition is specifically a future degradation level of said road.
However, in the same field of endeavor of predicting road conditions, the difference is found in Twumasi. In particular, Twumasi teaches: determine a future degradation level of the road (e.g., determine/predict a future functional condition of the road, abstract, ¶0035 - ¶0036, ¶0046); and
generate, based on the road image, a predictive image representing road degradation on a surface of the road, in accordance with the future degradation level (e.g., generate a driving index or a recommendation, based predicted future functional condition, ¶0026).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the image generation system of Ryu as taught by Twumasi, since Twumasi suggested within ¶0035 - ¶0036 and ¶0046 that such modification would classify a road based on a level of support offered by the road for autonomous driving operations in order to anticipate a solution.
With respect to claim 2, Ryu in view of Twumasi teaches the image generation system according to claim 1, wherein the predictive image is an image representing road degradation of at least one of cracks, pot holes, rutting, and flatness abnormality (e.g., the image to be displayed/generated includes cracks, pot holes, and other abnormalities, page 4 of 12).
With respect to claim 3, Ryu in view of Twumasi teaches the image generation system according to claim 1, wherein the at least one processor is further configured to execute the instructions to: generate the predictive image using a learning model (e.g., generate the road condition prediction model based on artificial intelligence learning, page 4 of 12).
With respect to claim 4, Ryu in view of Twumasi teaches the image generation system according to claim 1, wherein the at least one processor is further configured to execute the instructions to: generate the predictive image by superimposing a figure representing road degradation on the road image (Figs. 4, 6 & 9).
With respect to claim 5, Ryu in view of Twumasi teaches the image generation system according to claim 1, wherein the at least one processor is further configured to execute the instructions to: acquire a stored image representing road degradation of another road which is relevant to the determined future degradation level; and generate the predictive image based on the acquired stored image and the acquired road image (Ryu teaches the use of GPS, which is well known in the art to be usable to find an alternate road when the condition of the current road is poor or traffic is heavy, pages 1 – 2 of 12; and to perform image analysis again, but for the alternate road, page 4 of 12).
With respect to claim 6, Ryu in view of Twumasi teaches the image generation system according to claim 1, wherein the degradation level is an index indicating at least one of cracks, pot holes, rutting, and flatness abnormality (Figs. 3 – 5 – examples of deterioration levels having indexes).
With respect to claim 7, Ryu in view of Twumasi teaches the image generation system according to claim 1, wherein the at least one processor is further configured to execute the instructions to: determine a type of road degradation represented in the predictive image, and generate the predictive image in which the road degradation of a determined type is represented in the predictive image (e.g., the generated dynamic information or the road condition prediction model is based on the determined type of defect (damage), pages 6 and 7 of 12).
With respect to claim 8, Ryu in view of Twumasi teaches the image generation system according to claim 7, wherein the at least one processor is further configured to execute the instructions to determine a type of road degradation represented in the predictive image among straight cracks, tortoise-shell cracks, and pot holes (e.g., the generated dynamic information or the road condition prediction model is based on the determined type of defect (damage), pages 6 and 7 of 12).
With respect to claim 9, Ryu in view of Twumasi teaches the image generation system according to claim 1, wherein the at least one processor is further configured to execute the instructions to: determine a degradation level predicted based on a parameter related to the road as a future degradation level of the road (e.g., may then determine a level of support offered by the road for autonomous driving operations based on the future functional condition of the road, ¶0026).
With respect to claim 10, Ryu in view of Twumasi teaches the image generation system according to claim 9, wherein the at least one processor is further configured to execute the instructions to: determine a degradation level predicted based on a traveling speed of road degradation as a future degradation level of the road (e.g., in a case where the front road surface image is an image of a section from A meter to B meter in front of the vehicle, the time required to reach A meter and B meter in front of the vehicle according to the driving speed of the vehicle (hereinafter referred to as ‘a seconds’ and ‘b seconds’) and the time required to reach 0.9A meter and 1.1B meter (hereinafter referred to as ‘a′ seconds’ and ‘b′ seconds’) are calculated and can be changed, pages 4 and 5 of 12).
With respect to claim 13, Ryu in view of Twumasi teaches the image generation system according to claim 1, wherein the at least one processor is further configured to execute the instructions to: display the predictive image (e.g., the dynamic information or the road condition prediction model representing the condition of the road on a surface of the road is displayed, pages 6 - 7 of 12).
With respect to claim 19, this is a method claim corresponding to the apparatus claim 1. Therefore, this is rejected for the same reasons as the apparatus claim 1.
With respect to claim 20, Ryu notes that the invention may be realized through the execution by a CPU (e.g., a control unit 170, page 4 of 12) of instruction codes (e.g., storing program(s), page 4 of 12) stored in a non-transitory computer readable storage medium (e.g., a device memory 150, page 4 of 12). The further limitations are met by the teachings as previously discussed with respect to claim 1.
Claims 11, 12 and 14 - 18 are rejected under 35 U.S.C. 103 as being unpatentable over Ryu in view of Twumasi and further in view of Jumonji et al. (U.S. Pre-Grant Publication No. 2023/0091376 A1, hereinafter ‘Jumonji’).
With respect to claim 11, Ryu in view of Twumasi teaches the image generation system according to claim 1, but neither of them teaches wherein the at least one processor is further configured to execute the instructions to: determine a degradation level at a time point designated by a user as a future degradation level of the road.
However, the mentioned claimed limitations are well-known in the art as evidenced by Jumonji. In particular, Jumonji teaches wherein the at least one processor is further configured to execute the instructions to: determine a degradation level at a time point designated by a user as a future degradation level of the road (e.g., determine a deterioration level at a time point set by a user as a future deterioration level of the road, ¶0063 - ¶0065, Figs. 7 – 9 & 11 – 14).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the image generation system of Ryu in view of Twumasi as taught by Jumonji, since Jumonji suggested within Figs. 7 – 9 & 11 – 14 and ¶0063 - ¶0065 that such modification of determining a deterioration level at a point set by the user would define a time period in order to improve the accuracy of the deterioration prediction model based on reference information.
With respect to claim 12, Ryu in view of Twumasi teaches the image generation system according to claim 1, but neither of them teaches wherein the at least one processor is further configured to execute the instructions to: determine a degradation level having received an input from a user as a future degradation level of the road.
However, Jumonji teaches wherein the at least one processor is further configured to execute the instructions to: determine a degradation level having received an input from a user as a future degradation level of the road (e.g., these displays are configured to receive an input from the user as a future degradation level of the road, Figs. 7 – 9 & 11 – 14).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the image generation system of Ryu in view of Twumasi as taught by Jumonji, since Jumonji suggested within Figs. 7 – 9 & 11 – 14 and ¶0063 - ¶0065 that such modification of determining a deterioration level at a point set by the user would define a time period in order to analyze the image(s) based on the defined time period; thereby convenient for the user.
With respect to claim 14, Ryu in view of Twumasi teaches the image generation system according to claim 1, but neither of them teaches wherein the at least one processor is further configured to execute the instructions to: display together with the predictive image, one of the future degradation level, a time predicted to be required for progress of road degradation from a degradation level of the road image to the future degradation level of the predictive image, and the road image.
However, Jumonji teaches wherein the at least one processor is further configured to execute the instructions to: display together with the predictive image, one of the future degradation level, a time predicted to be required for progress of road degradation from a degradation level of the road image to the future degradation level of the predictive image, and the road image (e.g., Fig. 12 shows a screen simultaneously displaying several elements to assist the user; therefore, the arrangement is a matter of design choice).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the image generation system of Ryu in view of Twumasi as taught by Jumonji, since Jumonji suggested within Figs. 7 – 9 & 11 – 14 and ¶0063 - ¶0065 that such modification of displaying several elements simultaneously would familiarize the user with the usage of the buttons in order to avoid confusion; thereby convenient for the user.
With respect to claim 15, Ryu in view of Twumasi teaches the image generation system according to claim 13, but neither of them teaches wherein the at least one processor is further configured to execute the instructions to: determine the future degradation levels at a plurality of future time points, generate a plurality of the predictive images in which road degradation relevant to the future degradation levels at the plurality of timepoints is represented on the road, and display a plurality of the predictive images.
However, Jumonji teaches wherein the at least one processor is further configured to execute the instructions to:
determine the future degradation levels at a plurality of future time points,
generate a plurality of the predictive images in which road degradation relevant to the future degradation levels at the plurality of timepoints is represented on the road, and display a plurality of the predictive images (e.g., Figs. 7 – 14 are screens configured to determine deterioration levels at several time points, generated by inputting the prediction time into the deterioration prediction model, and to display the predicted deterioration level on a map in a superimposed manner, Figs. 5 – 6, ¶0042, ¶0051 - ¶0052).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the image generation system of Ryu in view of Twumasi as taught by Jumonji, since Jumonji suggested within Figs. 5 – 6, ¶0042, ¶0051 - ¶0052 that such modification of displaying several predictive images would allow the user to periodically analyze the images; thereby convenient for the user.
With respect to claim 16, Ryu in view of Twumasi teaches the image generation system according to claim 13, but neither of them teaches wherein the at least one processor is further configured to execute the instructions to: generate a graph representing a relationship between time and the degradation level on the road, and display the graph and displays a correspondence relationship between the displayed predictive image and a position in the graph.
However, Jumonji teaches wherein the at least one processor is further configured to execute the instructions to: generate a graph representing a relationship between time and the degradation level on the road, and display the graph and displays a correspondence relationship between the displayed predictive image and a position in the graph (e.g., refer to ¶0044 - ¶0045 and Fig. 4, where the deterioration of the road is predicted at a future point in time using the deterioration curve illustrated in Fig. 4, which indicates a crack rate (deterioration degree) along the deterioration curve with time).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the image generation system of Ryu in view of Twumasi as taught by Jumonji, since Jumonji suggested within ¶0044 - ¶0045 and Fig. 4 that such modification of displaying a graph would allow the user to analyze and compare data between time and level in order to predict how long the road will last.
With respect to claim 17, Ryu in view of Twumasi teaches the image generation system according to claim 13, but neither of them teaches wherein the at least one processor is further configured to execute the instructions to: display a screen for receiving selection of the predictive image to be displayed among a plurality of the predictive images.
However, Jumonji teaches wherein the at least one processor is further configured to execute the instructions to: display a screen for receiving selection of the predictive image to be displayed among a plurality of the predictive images (e.g., these screens are configured to select prediction time point by a scroll bar, ¶0061, Figs. 10 – 14).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the image generation system of Ryu in view of Twumasi as taught by Jumonji, since Jumonji suggested within ¶0061 and Figs. 10 – 14 that such modification of selecting among the images would allow the user to select the images to display in order to get the desired image; thereby convenient for the user.
With respect to claim 18, Ryu in view of Twumasi teaches the image generation system according to claim 13, but neither of them teaches wherein the at least one processor is further configured to execute the instructions to: display a map for receiving designation of the road on which the predictive image is to be displayed.
However, Jumonji teaches wherein the at least one processor is further configured to execute the instructions to: display a map for receiving designation of the road on which the predictive image is to be displayed (e.g., maps are displayed in order to select prediction time from predicted deterioration level displayed on maps, Figs. 6 – 10 with ¶0054 & ¶0057).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the image generation system of Ryu in view of Twumasi as taught by Jumonji, since Jumonji suggested within Figs. 6 – 10 with ¶0054 & ¶0057 that such modification of displaying a map would guide/assist the user as to where exactly the user is in order to take further action (e.g., perhaps to take another or alternate route).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JUAN M GUILLERMETY whose telephone number is (571)270-3481. The examiner can normally be reached 9:00AM - 5:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Benny Q TIEU can be reached at 571-272-7490. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JUAN M GUILLERMETY/ Primary Examiner, Art Unit 2682