Prosecution Insights
Last updated: April 19, 2026
Application No. 17/455,511

WORK TOOL CAMERA SYSTEM FOR UTILITY VEHICLES

Non-Final OA: §103, §112
Filed: Nov 18, 2021
Examiner: YAO, JULIA ZHI-YI
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Deere & Company
OA Round: 5 (Non-Final)
Grant Probability: 68% (Favorable)
Projected OA Rounds: 5-6
Projected Time to Grant: 3y 4m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 68% (above average; 47 granted / 69 resolved; +6.1% vs TC avg)
Interview Lift: +35.7% (strong; allowance-rate gain among resolved cases with an interview)
Typical Timeline: 3y 4m average prosecution; 29 applications currently pending
Career History: 98 total applications across all art units

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§103: 52.6% (+12.6% vs TC avg)
§102: 11.2% (-28.8% vs TC avg)
§112: 26.1% (-13.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 69 resolved cases.
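The headline figures above follow directly from the raw counts. A minimal sketch of the arithmetic is below; note that the per-group interview rates in it are hypothetical placeholders, since only the 47/69 career totals and the +35.7% lift are reported in the data above:

```python
# Reconstruct the headline dashboard figures from the raw counts above.
granted, resolved = 47, 69          # career totals from the dashboard

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # ~68.1%, shown as 68%

def interview_lift(rate_with, rate_without):
    # Percentage-point gap between the allowance rates of resolved
    # cases that had an examiner interview and those that did not.
    return rate_with - rate_without

# The two per-group rates here are hypothetical; only the resulting
# +35.7% lift is reported by the dashboard.
print(f"Interview lift: {interview_lift(0.857, 0.500):+.1%}")
```

The "vs TC avg" deltas in the statute table are the same kind of subtraction, taken against the estimated Tech Center averages.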

Office Action (§103, §112)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on August 25, 2025, has been entered.

Response to Amendment

Claims 1, 3-4, 8-9, and 22 in the claim set filed March 27, 2025, were pending for examination in Application No. 17/455,511, filed November 18, 2021. In the remarks and amendments received on August 25, 2025, claims 1, 3, 8, and 22 are amended, claims 2, 5-7, and 10-21 remain cancelled, and claim 23 is added. Accordingly, claims 1, 3-4, 8-9, and 22-23 are currently pending for examination in the application.

Response to Arguments

Applicant's arguments filed August 25, 2025, regarding the rejections of the claims have been fully considered but are moot because they do not apply to the new combination of references used in the current rejection below. The arguments that are not rendered moot by the new combination of references, in light of Applicant's newly submitted amendments, are addressed below.

Arguments Toward 35 U.S.C. § 112(f) Interpretations

The examiner respectfully disagrees with Applicant's assertion that the phrases "the first imaging apparatus" and "the second imaging apparatus" are not subject to interpretation under 35 U.S.C. § 112(f) because the phrases "recite sufficient structure (i.e. camera) and are not subject to a 35 U.S.C. § 112(f) Interpretation" (pg. 5 of Applicant's Remarks). The claims do not explicitly recite that said imaging apparatuses include cameras, nor does the instant Specification define the imaging apparatuses as having the structure of only cameras (e.g., para. [033] recites "imaging apparatuses including stereo cameras, lidar, radar, or other similar devices"). The examiner additionally notes that it is improper to import limitations from the Specification into the claims.

The phrases "the first imaging apparatus" and "the second imaging apparatus" in claim 1 meet the three-prong test for interpretation under 35 U.S.C. § 112(f) explained in MPEP § 2181, subsection I, as follows: (A) each of the phrases uses the generic placeholder "apparatus" for performing the claimed function of capturing a respective image (i.e., the "first image" and the "second image"); (B) the generic placeholder is modified by functional language of capturing an image linked by the linking word "to" (e.g., "…to capture a first image" and "…to capture a second image"); and (C) the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function (e.g., the claim does not explicitly recite that the apparatuses include, for example, cameras, as argued by Applicant). Thus, the phrases "the first imaging apparatus" and "the second imaging apparatus" remain subject to interpretation under 35 U.S.C. § 112(f).

Rejection of Claim 1 under 35 U.S.C. § 103

The examiner respectfully disagrees with Applicant's assertion that Yamashita and Tsuji, in combination, do not teach or suggest omitting the work tool from the modified image as claimed because Tsuji's teaching of generating a "see-through" image by extracting a blocked region and synthesizing a perspective image, as cited in col. 12, lines 45-67, and col. 13, lines 1-17, of Tsuji, "do[es] not result in a modified image that omits the work tool in the manner claimed" (pg. 7 of Applicant's Remarks). As detailed in the rejection below, synthesizing a "perspective image" based on the combination of a first image ("image data CDT obtained by camera 40") and a second image ("image data ICDT obtained by camera 45") is generating a modified image using perspective transformation (i.e., perspective image synthesis) based on the combination of the first and second images, and lines 26-41 in col. 12 of Tsuji further teach that the "see-through" image omits the work tool as claimed: the work tool is omitted by digitally removing pixels corresponding to the work tool such that only the material and the environment are visible in the modified image (see the rejection of claim 1 below).

Rejection of Claim 8 under 35 U.S.C. § 103

The examiner respectfully disagrees with Applicant's assertion that the cited references "do not teach or suggest the specific orientation of the imaging apparatuses as claimed" because "Yamashita describes multiple cameras, but does not disclose or suggest the claimed arrangement of facing directions relative to the main frame" (pgs. 7-8 of Applicant's Remarks). As detailed in the current rejection below, the cited portions of Yamashita further disclose that at least the first imaging apparatus is oriented to face a front end of the main frame, as the first imaging apparatus captures a "rear part of the work implement 3" (referring to at least element 31 in Fig. 4A, paras. [0047] and [0061]), and that at least a second imaging apparatus is oriented to face a rear end of the main frame, as the second imaging apparatus captures a "front surface of the work implement 3" (referring to element 35 in Fig. 1, paras. [0048] and [0062]).
Claim Objections

Claims 1 and 3 are objected to because of the following informalities, which fail to comply with the requirement of 37 CFR 1.71(a) for "full, clear, concise, and exact terms" (see MPEP § 608.01(m)):

In claim 1, "an arm member of swing circle" should be "an arm member of a swing circle";
In claim 3, "the first portion of the work tool" should be "the [[first]]rear portion of the work tool" to maintain consistency with the claim language of claim 1; and
In claim 3, "the second portion of the work tool" should be "the [[second portion]]front surface of the work tool" to maintain consistency with the claim language of claim 1.

Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means" but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier, as explained in MPEP § 2181, subsection I (note that the list of generic placeholders below is not exhaustive, and other generic placeholders may invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph):

A. The Claim Limitation Uses the Term "Means" or "Step" or a Generic Placeholder (a Term That Is Simply a Substitute for "Means")

With respect to the first prong of this analysis, a claim element that does not include the term "means" or "step" triggers a rebuttable presumption that 35 U.S.C. 112(f) does not apply. When the claim limitation does not use the term "means," examiners should determine whether that presumption is overcome. The presumption may be overcome if the claim limitation uses a generic placeholder (a term that is simply a substitute for the term "means"). The following is a list of non-structural generic placeholders that may invoke 35 U.S.C. 112(f): "mechanism for," "module for," "device for," "unit for," "component for," "element for," "member for," "apparatus for," "machine for," or "system for." Welker Bearing Co. v. PHD, Inc., 550 F.3d 1090, 1096, 89 USPQ2d 1289, 1293-94 (Fed. Cir. 2008); Mass. Inst. of Tech. v. Abacus Software, 462 F.3d 1344, 1354, 80 USPQ2d 1225, 1228 (Fed. Cir. 2006); Personalized Media, 161 F.3d at 704, 48 USPQ2d at 1886-87; Mas-Hamilton Group v. LaGard, Inc., 156 F.3d 1206, 1214-1215, 48 USPQ2d 1010, 1017 (Fed. Cir. 1998). Note that there is no fixed list of generic placeholders that always result in a 35 U.S.C. 112(f) interpretation, and likewise no fixed list of words that always avoid it. Every case will turn on its own unique set of facts.

Such claim limitations are the following, which are implemented on the same hardware disclosed in paragraph [042] (e.g., "The first imaging apparatus 50 and the second imaging apparatus 52 may each comprise a camera."):

"first imaging apparatus…to capture a first image…" in independent claim 1 (claim 8 is similarly interpreted); and
"second imaging apparatus…to capture a second image…" in independent claim 1 (claim 8 is similarly interpreted).

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If Applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, Applicant may: (1) amend the claim limitations to avoid such interpretation (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 3 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 3 recites the limitations "the amount of material" and "the difference in pixel regions". There is insufficient antecedent basis for these limitations in the claim. For examination purposes, these limitations will be read as "[[the]]an amount of material" and "[[the]]a difference in pixel regions", respectively.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 4, and 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Yamashita et al. (Yamashita; US 2017/0050566 A1, 2017) in view of Tsuji et al. (Tsuji; US 10,435,868 B2, 2019), and further in view of Ono (Ono-085; US 2020/0378085 A1).

Regarding claim 1, Yamashita discloses a utility vehicle comprising:

a main frame (referring to element 2 in Fig. 1, para. [0043] [image of cited passage omitted], where "vehicle body 2" is a main frame of a utility vehicle (i.e., "work vehicle 1"));

a work tool (referring to elements 3 and 8 in Fig. 1, para. [0045], which recites: "The work implement 3 is provided in front of the engine room 6. The work implement 3 includes a blade 8, hydraulic cylinders 9 and 11, and an arm 10. The blade 8 is supported on the vehicle body 2 via the arm 10. The blade 8 has a front surface plate 8 a.
The front surface plate 8 a has the shape of a curved surface that is recessed with respect to the forward direction of the vehicle. A cutting edge 8 b is provided on the lower end part of the front surface plate 8 a. The blade 8 is provided in a manner that allows for swinging in the up-down direction. The hydraulic cylinders 9 and 11 change the orientation of the blade 8. A large portion of the front surface plate 8 a is tilted to the rear and the front surface of the front surface plate 8 a can be seen in a plan view when the blade 8 is in a normal orientation. The work implement 3 is used for work such as excavating, earth moving, or ground leveling. The work implement 3 excavates the ground surface and the like with the cutting edge 8 b of the blade 8 and loads and carries the excavated sand and dirt and the like on the front surface plate 8 a.", where "work implement 3 includes a blade 8" is a work tool comprising a blade configured to move relative to the main frame to move a material (i.e., "sand" and/or "dirt"));

a first imaging apparatus rigidly mounted to the main frame and oriented to capture a first image of a rear portion of the work tool (referring to Fig. 4A, paras. [0047] and [0061], which recite: [0047] "Specifically, the first capturing image unit 21 includes first to fourth image capturing devices 31 to 34 which are attached to the vehicle body 2. The first to fourth image capturing devices 31 to 34 are fish-eye lens cameras. As illustrated in FIG. 1, the first image capturing device 31 is attached to a front part of the vehicle body 2. The first image capturing device 31 captures images in front of the vehicle body 2…The image capturing directions of the first to fourth image capturing devices 31 to 34 face outward from the vehicle body 2, and the depression angle is 0 degrees." [0061] "…Because an image of the rear part of the work implement 3 (see FIG. 4A) is photographed in the front image Im1, a portion in front of the work implement 3 including the front surface of the work implement 3 becomes a blind spot. As a result, an image Imw (diagonal line hatched portion in FIG. 6) in front of the work implement 3 is omitted or not displayed accurately in the surroundings composite image Is1 as illustrated in FIG. 6." [image omitted], where "front image Im1" is a first image captured by a first imaging apparatus (i.e., "first capturing image unit 21", such as "first image capturing device 31") rigidly mounted to the main frame (i.e., "attached to a front part of the vehicle body") and aimed at a rear portion of the work tool (i.e., a "rear part of the work implement 3"));

a second imaging apparatus rigidly mounted to a (referring to element 35 in Fig. 1, paras. [0048] and [0062], which recite: [0048] "The second image capturing unit includes a fifth image capturing device 35…The fifth image capturing device 35 captures images in a direction inclined toward the vehicle with respect to the vertical direction as seen in a side view of the vehicle. The image capturing surface area of the front surface of the blade 8 can be increased by capturing images in the direction inclined toward the vehicle." [0062] "FIG. 7 illustrates an example of the work implement front image Im5 captured by the second image capturing unit 22. As illustrated in FIG. 7, the work implement front image Im5 includes an image Ima depicting the front surface of the work implement 3, and an image Imb depicting the ground surface located in front of the work implement 3. When the actual work vehicle 1 is carrying a work object X such as sand and dirt as illustrated in FIG. 8, an image Imx depicting the work object X as illustrated in FIG. 7 is included in the work implement front image Im5.", where "work implement front image Im5" is a second image captured by a second imaging apparatus (i.e., "second image capturing unit 22", such as "fifth image capturing device 35") rigidly mounted (e.g., "attached") at a surface of a frame (i.e., "fifth image capturing device 35 is supported on an arm member 36…[which] is attached to vehicle body 2") and oriented at a front surface of a work tool (i.e., a "front surface of the work implement 3"));

an electronic processor in communication with the first imaging apparatus and the second imaging apparatus, wherein the electronic processor is configured to generate a modified image (referring to Fig. 2, para. [0052], which recites: "The controller 25 is configured with a computation device such as a CPU. The controller 25 generates the display image Is from the images captured by the first capturing image unit 21 and the second image capturing unit 22. The generation of the display image Is is explained in greater detail below." [image omitted], where "controller 25" is an electronic processor in communication with the first imaging apparatus (i.e., "first capturing unit 21") and the second imaging apparatus (i.e., "second capturing unit 22") to generate a modified image (i.e., "display image Is")); and

a display for displaying the modified image of the material being moved by the work tool, wherein the modified image is based on a combination of the first image and the second image (para. [0069], which recites: "The display unit 27 displays the display image Is. FIG. 11 illustrates an example of the display image Is. As illustrated in FIG. 11, the display image Is displays the work vehicle 1 and the surroundings thereof in a three-dimensional manner as seen diagonally from in front and from above. The display image Is includes the vehicle model M1, the surroundings composite image Is1, and the work implement composite image Is2 generated as described above. Specifically, the condition of the surroundings of the work vehicle 1 captured by the first capturing image unit 21 is displayed as the surroundings composite image Is1 of the surroundings of the vehicle model M1 in the display image Is. Further, the condition of the front of the blade 8 captured by the second image capturing unit 22 is displayed in front of the blade of the vehicle model M1 and on the ground surface therebelow as the work implement composite image Is2. The display image Is is updated in real time and displayed as a video.", where "display unit 27" is a display for displaying a modified image (i.e., "display image Is") of the material being moved by the work tool based on a combination of the first image and the second image (i.e., the "display image Is" including the "surroundings composite image Is1"—generated from the first image of the first imaging apparatus ("surroundings of the work vehicle 1 captured by the first capturing image unit 21")—and the "work implement composite image Is2"—generated from the second image of the second imaging apparatus ("the front of the blade 8 captured by the second image capturing unit 22")—is a modified image based on a combination of the first image and the second image)).
Where Yamashita does not specifically disclose …the modified image is based on a combination of the first image and the second image using perspective transformation, and the modified image omits the work tool by digitally removing pixels corresponding to the work tool based on a model of the work tool, such that only the material and the environment are visible in the modified image, Tsuji teaches, in the same field of endeavor of improving the visibility of material moved by a work tool of a utility vehicle, …the modified image is based on a combination of the first image and the second image using perspective transformation, and the modified image omits the work tool by digitally removing pixels corresponding to the work tool based on a model of the work tool, such that only the material and the environment are visible in the modified image (column 12, lines 45-67, and column 13, lines 1-17 [images of cited passages omitted], where the "perspective image as being synthesized with image data CDT" is using perspective transformation to generate a modified image based on a combination of a first image (i.e., "image data CDT obtained by camera 40") and a second image (i.e., "image data ICDT obtained by camera 45"); and the modified image omits the work tool by digitally removing pixels corresponding to the work tool based on a model of the work tool such that only the material and the environment are visible in the modified image (i.e., displaying the "blocked region of bucket 7 in image data CDT, with the contour line of the blocked region," such as seen in Fig. 8, is displaying a modified image that omits a work tool—i.e., "bucket 7" or "work implement 3"), as recited in column 12, lines 26-41, referring to Fig. 8 [images omitted], where the outline of the work tool depicting the material hidden by the work tool is omitting the work tool by digitally removing pixels (e.g., making the work tool see-through as seen in Fig. 8) corresponding to the work tool based on a model of the work tool (e.g., the shape or "outer shell" of the work tool) such that only the material (e.g., the "blocked region" as depicted in Fig. 8) and the environment (e.g., the background of the image) are visible in the modified image).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Yamashita to incorporate generating and displaying a modified image based on a combination of the first image and the second image using perspective transformation that omits the work tool by digitally removing pixels corresponding to the work tool based on a model of the work tool, such that only the material and the environment are visible in the modified image, to allow an operator to operate the work tool more efficiently, as taught by Tsuji (column 2, lines 4-14 [image of cited passage omitted]).
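The "see-through" technique attributed to Tsuji above (combining two camera views by perspective transformation, then replacing the pixels the work tool blocks so that only material and environment remain visible) can be sketched as follows. This is an illustrative reconstruction, not code from any cited reference; the homography H, the tool mask, and the image shapes are all assumed for the sketch:

```python
import numpy as np

def warp_perspective(img, H, out_shape):
    # Nearest-neighbor homography warp: each output pixel (x, y) is
    # sampled from the source image at the point H^-1 @ [x, y, 1].
    h, w = out_shape
    flat = np.zeros((h * w,) + img.shape[2:], dtype=img.dtype)
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = Hinv @ pts
    sx = np.rint(src[0] / src[2]).astype(int)
    sy = np.rint(src[1] / src[2]).astype(int)
    ok = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    flat[ok] = img[sy[ok], sx[ok]]
    return flat.reshape((h, w) + img.shape[2:])

def see_through(first_img, second_img, tool_mask, H):
    # Pixels flagged by tool_mask (the region the work tool blocks,
    # derived from a model of the tool) are removed from the first
    # camera's image and refilled from the perspective-transformed
    # second camera's image, leaving only material and environment.
    warped = warp_perspective(second_img, H, first_img.shape[:2])
    out = first_img.copy()
    out[tool_mask] = warped[tool_mask]
    return out
```

With an identity homography the warp is a pass-through; in practice H would come from calibrating the relative poses of the two cameras, and the mask from projecting the tool model into the first camera's view.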
Where Yamashita in view of Tsuji does not specifically disclose a utility vehicle comprising: a main frame; an articulated frame movably coupled with the main frame; a draft frame movably coupled with the articulated frame; a circle frame movably coupled with the draft frame; a work tool movably coupled with the draft frame and configured to move relative to the main frame to move a material, wherein the work tool comprises a blade; and a second imaging apparatus rigidly mounted to a bottom surface of the draft frame and oriented to capture a second image of a front surface of the work tool, wherein the second imaging apparatus is not supported by an arm member of a swing circle, Ono-085 teaches, in the same field of endeavor of imaging material in front of a work tool of a utility vehicle:

a main frame; an articulated frame movably coupled with the main frame; a draft frame movably coupled with the articulated frame; a circle frame movably coupled with the draft frame; and a work tool movably coupled with the draft frame and configured to move relative to the main frame to move a material, wherein the work tool comprises a blade (referring to Fig. 8 [image omitted], which depicts a motor grader comprising a main frame ("vehicular body frame 2"), an articulated frame ("front frame 22"), a draft frame ("draw bar 40"), a circle frame ("swing circle 41"), and a work tool ("blade 42")); and

a second imaging apparatus rigidly mounted to a bottom surface of the draft frame and oriented to capture a second image of a front surface of the work tool (referring to Fig. 8, para. [0064], which recites: "FIG. 8 is a schematic diagram showing a range of image pick-up by image pick-up apparatus 60 shown in FIG. 7…Image pick-up apparatus 60 can pick up an image of soil built up on the front surface of blade 42 while motor grader 1 travels forward.", where "image pick-up apparatus 60" is a second imaging apparatus rigidly mounted to the bottom surface of the draft frame ("draw bar 40") and oriented to capture a second image of a front surface of the work tool (e.g., the "front surface of blade") as depicted in Fig. 8), wherein the second imaging apparatus is not supported by an arm member of a swing circle (para. [0079], which recites: "Blade 42 pivots with revolution of swing circle 41 so that blade angle θ varies. According to such a construction that image pick-up apparatus 60 is fixed to swing circle 41 which rotates, image pick-up apparatus 60 and blade 42 revolve together with revolution of swing circle 41. Since positions of image pick-up apparatus 60 and blade 42 relative to each other do not vary in spite of variation in blade angle θ, image pick-up apparatus 60 can reliably pick up an image of the front surface of blade 42.", where para. [0079] further teaches that the second imaging apparatus ("image pick-up apparatus 60") being "fixed to swing circle 41" is the second imaging apparatus not being supported by an arm member of a swing circle (e.g., "draw bar 40")).

Since Yamashita, Tsuji, and Ono-085 each disclose benefits of imaging the front of a work tool of a utility vehicle, to allow an operator of the utility vehicle to view the conditions of the area in front of the work tool and/or the state of the work tool when moving material (Yamashita teaches in paras. [0006]-[0007] and [0012]: [0006] "A work implement such as a blade may be mounted on a vehicle body in a work vehicle such as a bulldozer.
It is preferable to be able to grasp the conditions of the area in front of the work implement in order to grasp the working conditions for such a work vehicle…” [0007] “However, because the area in front of the work implement is a blind spot to the operator inside the operating cabin, it is difficult to directly grasp the amount of earth in a visual manner…” [0012] “…the work implement composite image depicting the area in front of the work implement are synthesized in the display image in the display system for the work vehicle according to the present aspect. As a result, an operator is able to easily understand the conditions of the surroundings of the work vehicle and the conditions of the area in front of the work implement from the display image.” , where Tsuji similarly teaches in column 12, lines 26-41, referring to Fig. 8—see citation in claim 1, limitation “…the modified image is based on…,” above—; and Ono-085 also similarly teaches in para. [0059], [0059] “For example, by showing a picked-up image of the front surface of blade 42 on a monitor provided in cab 3, an operator who is on board cab 3 can visually recognize soil built up on the front surface of blade 42. The operator can optimally adjust blade angle θ (FIG. 5) in consideration of a condition of running of motor grader 1, current topography in front of motor grader 1, and an amount of soil held in blade 42 at the current time point. 
A revolving operation of blade 42 can also automatically be controlled based on the amount of soil built up on the front surface of blade 42.” ), it would have been obvious to one of ordinary skill in the art before the effective filing date of the present invention to apply the modified image generation method of Yamashita in view of Tsuji to the utility vehicle of a motor grader (which comprises the parts claimed above) including rigidly mounting the second imaging apparatus to the bottom surface of a draft frame of a motor grader, wherein the second imaging apparatus is not supported by an arm member of a swing circle, to allow the operator of a utility vehicle that moves materials to easily visualize the area in front of the work tool of such utility vehicles, as taught by each of Yamashita, Tsuji, and Ono-085, and to reliably pick up an image of the front surface of the blade as taught by Ono-085 above.

Regarding claim 4, Yamashita, as modified by Tsuji and Ono-085, discloses the utility vehicle of claim 1, wherein Yamashita further discloses the utility vehicle further comprising an operator cab supported by the main frame, wherein the work tool is positioned longitudinally forward of the operator cab (referring to element 5 in Fig. 1—see figure in claim 1—, para. [0044], recites [0044] “The vehicle body 2 has a travel device 4, an operating cabin 5, and an engine room 6. The travel device 4 is a device for causing the work vehicle 1 to travel. The travel device 4 has a crawler belt 7. The work vehicle 1 travels due to the crawler belt 7 being driven. The engine room 6 is disposed in front of the operating cabin 5. An engine and a hydraulic pump and the like, which are not included in the figures, are disposed inside the engine room 6.” , where “operating cabin 5” is an operator cab supported by the main frame (i.e., “vehicle body 2”); and as depicted in Fig.
1—see figure in claim 1—, the work tool (i.e., “work implement 3”) is positioned longitudinally forward of the operator cab (i.e., “operating cabin 5”)).

Regarding claim 8, Yamashita, as modified by Tsuji and Ono-085, discloses the utility vehicle of claim 1, wherein Yamashita further discloses the first imaging apparatus is oriented to face a front end of the main frame and the second imaging apparatus is oriented to face a rear end of the main frame (referring to at least element 31 in Fig. 4A, paras. [0047] and [0061] recite that the first imaging apparatus faces a front end of the main frame (i.e., the first imaging apparatus capturing a “rear part of the work implement 3” is a first imaging apparatus facing a front end of the main frame)—see claim 1, limitation “a first imaging apparatus rigidly mounted to…,” above—; and referring to element 35 in Fig. 1, paras. [0048] and [0062] recite that the second imaging apparatus faces a rear end of the main frame (i.e., the second imaging apparatus capturing a “front surface of the work implement 3” is a second imaging apparatus facing a rear end of the main frame)—see claim 1, limitation “a second imaging apparatus rigidly mounted to…,” above), and both are fixed to their respective frames without intermediary support structures (referring to at least element 31 in Fig. 4A, paras. [0047] and [0061]—see citations in claim 1, limitation “a first imaging apparatus rigidly mounted to…,” above—, and referring to element 35 in Fig. 1, paras. [0048] and [0062]—see citations in claim 1, limitation “a first imaging apparatus rigidly mounted to…,” above—, where directly affixing the imaging apparatuses onto their respective frames as depicted in the cited figures is fixing both the imaging apparatuses to their respective frames without intermediary support structures).

Regarding claim 9, Yamashita, as modified by Tsuji and Ono-085, discloses the utility vehicle of claim 1, wherein Tsuji further teaches the modified image includes one or more of an outline of the work tool (referring to Fig. 8, column 12, lines 26-41—see citation in claim 1, limitation “…the modified image is based on…,” taught by Tsuji, above—, where the “skeletal contour line [of bucket 7 of work implement 3] is shown with a chain dotted line” is the modified image (i.e., the synthesized “perspective image”) including at least an outline (i.e., “skeletal contour line” or “chain dotted line”) of the work tool (i.e., “work implement 3” including “bucket 7”)).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Yamashita, as modified by Tsuji and Ono-085, as applied to claim 1 above, and further in view of Hageman et al. (Hageman-846; US 2020/0334846 A1), and further in view of Hageman et al. (Hageman-653; US 2020/0325653 A1).

Regarding claim 3, Yamashita, as modified by Tsuji and Ono-085, discloses the utility vehicle of claim 1, wherein Yamashita further discloses the first portion of the work tool is a back side of the blade and the second portion of the work tool is a front side of the blade (referring to Fig. 4A, paras. [0047] and [0061] recite that the first portion of the work tool is a back side of the blade (i.e., “rear part of the work implement 3”)—see claim 1, limitation “a first imaging apparatus rigidly mounted to…,” above—; and referring to element 35 in Fig. 1, paras. [0048] and [0062] recite that the second portion of the work tool is a front side of the blade (i.e., “front surface of the work implement 3”)—see claim 1, limitation “a second imaging apparatus rigidly mounted to…,” above), and wherein the modified image further includes a visual (paras. [0062-0063], recite [0062] “FIG. 7 illustrates an example of the work implement front image Im5 captured by the second image capturing unit 22. As illustrated in FIG.
7, the work implement front image Im5 includes an image Ima depicting the front surface of the work implement 3, and an image Imb depicting the ground surface located in front of the work implement 3. When the actual work vehicle 1 is carrying a work object X such as sand and dirt as illustrated in FIG. 8, an image Imx depicting the work object X as illustrated in FIG. 7 is included in the work implement front image Im5.” [0063] “The second image generation unit 29 generates the work implement composite image Is2 by projecting the work implement front image Im5 captured by the second image capturing unit 22 onto a second projection plane A2 as illustrated in FIG. 9… ” , where the composite image includes an image capturing “work object X” such as “sand and dirt” moved by the “work implement” is the modified image further including a visual of the material moved by the blade along a length of the blade).

Where Yamashita, as modified by Tsuji and Ono-085, does not specifically disclose …a visual indicator of the amount of material moved…; Hageman-846 teaches in the same field of endeavor of imaging a work tool of a utility vehicle …a visual indicator of the amount of material moved… (para. [0052], recites [0052] “Returning to FIG. 4, the image data representing material in the bucket can be provided to the volume computation module 420, which can compare that image data to the configuration file to output an estimate of the volume of the material in the bucket. The estimate of the volume can be provided to the user interface 406, which may be a display device that can output a visual representation of the volume estimation. FIG. 9 depicts an example of a point cloud of material 902 in a bucket 904 as processed from a camera image 906 obtained from a stereo camera. FIG. 10 depicts an example of the user interface 406 on a display device 1002 according to some aspects of the present disclosure.
The user interface 406 can include a bar chart 1004 depicting a history of volumes of material calculated from previous scoops and a three-dimensional representation 1006 of a point cloud or similar depiction of a current scoop.” , where the “visual representation of the volume estimation” is a visual indicator of the amount of material moved by a work tool (e.g., a “bucket”)).

Since Hageman-846 discloses estimating a volume of material of a work tool can be applied to any type of earth-moving work vehicle including a scraper (e.g., a motor grader; para. [0074], recites [0074] “Examples of the present disclosure can be implemented using any type of work vehicle, such as an excavator or scraper. One example of a scraper 1900 is shown in FIG. 19. The scraper 1900 can be a piece of equipment used for earthmoving. The scraper 1900 can include a container 1902 and a cutting edge that can be raised or lowered. As the scraper 1900 moves along a surface, the cutting edge can scrape up earth and fill the container 1902.” ) such as the motor grader taught by Ono-085 (Fig. 8—see the rejection of claim 1 above), it would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Yamashita, as modified by Tsuji and Ono-085, to incorporate a visual indicator of the amount of material moved by the blade along a length of the blade to improve the estimation of material moved by a work vehicle for improving the productivity of material moved by the work vehicle as taught by Hageman-846 (para. [0028], recites [0028] “Using certain aspects, the productivity of material moved by a work vehicle or a group of work vehicles at a work site can be monitored. Visual sensors, such as 3D sensors or laser sensors, can be used to improve volume estimation accuracy and ease.
Costs and inconvenience can be reduced, as compared to other volume estimation solutions, such as position sensors and weighting sensors.” ).

Where Yamashita, as modified by Tsuji, Ono-085, and Hageman-846, does not specifically disclose …the amount of material moved by the blade along a length of the blade, calculated based on the difference in pixel regions corresponding to the material in the first and second images; Hageman-653 teaches in the same field of endeavor of imaging a work tool of a utility vehicle …the amount of material moved by the blade along a length of the blade, calculated based on the difference in pixel regions corresponding to the material in the first and second images (paras. [0054-0055] and [0057], recite [0054] “An example volume sensor includes, but is not limited to, a three-dimensional (3D) sensor, such as a stereo camera or a laser scanner, that captures images (or data) of contents in container 128 that are, at least in part, indicative of a volume of the contents. In one example, 3D sensor 150 includes a lidar array that senses heights and volumes of points that correspond with the contents in container 128.” [0055] “As discussed in further detail below, to sense the volume, one example process includes measuring 3D points, with the 3D sensor, that represent the surface of material carried by the container of a work machine. Briefly, the 3D points that correspond to the material carried by the container can be processed to generate a 3D point cloud that is compared to 3D points (or other model) that correspond to the container 128, to determine a volume of the contents.” [0057] “In one example, the volume of material can be calculated using the 3D points corresponding to the material carried by the container using (i) the orientation or location of the carrier relative to the sensor and (ii) a 3D shape of the container.
For example, the volume can be calculated as a difference in the surface of the material in container 128 from a reference surface (e.g., the container interior) that represents a known volume.” , where calculating the “volume of material” of the work tool (e.g., a “container”) based on “compar[ing] …3D points (or other model) that correspond to the container” to “3D points that correspond to the material carried by the container” is a visualization of the amount of material moved by the work tool based on a difference (i.e., comparison) in pixel regions (e.g., 3D points) corresponding to the material in multiple images (e.g., “capture[d] images”)).

Since Hageman-653 discloses estimating a volume of material of a work tool can be applied to any type of earth-moving work vehicle including a scraper (e.g., a motor grader; para. [0034], recites [0034] “However, it is noted that examples described herein can be implemented using any type of earth-moving vehicle or machine, such as an excavator or scraper. These illustrative examples are given to introduce the reader to the general subject matter discussed here and are not intended to limit the scope of the disclosed concepts. …” ) such as the motor grader taught by Ono-085 (Fig. 8—see the rejection of claim 1 above), it would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Yamashita, as modified by Tsuji, Ono-085, and Hageman-846, to incorporate a visual indicator of the amount of material moved by the blade along a length of the blade calculated based on the difference in pixel regions corresponding to the material in the first and second images to better estimate the amount of material moved by the work vehicle for improving the productivity of material moved by the work vehicle as taught by Hageman-653 (para.
[0045], recites [0045] “Further, the productivity of material moved by a work vehicle or a group of work vehicles at a work site can be monitored. Volume sensors can be used to improve volume estimation accuracy and ease. Costs and inconvenience can be reduced, as compared to other volume estimation solutions, such as position sensors and weighting sensors.” ).

Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Yamashita, as modified by Tsuji and Ono-085, as applied to claim 1 above, and further in view of Hageman et al. (Hageman-653; US 2020/0325653 A1).

Regarding claim 22, Yamashita, as modified by Tsuji and Ono-085, discloses the utility vehicle of claim 1, wherein Hageman-653 further teaches in the same field of endeavor of imaging a work tool of a utility vehicle the processor further estimates a volume of material moved by the work tool based on a three-dimensional reconstruction using the combined first and second images and a stored geometric model of the work tool (paras. [0054-0055] and [0057]—see citations in claim 3 above—, where calculating the “volume of material” of the work tool (e.g., a “container”) based on “compar[ing] …3D points (or other model) that correspond to the container” to “3D points that correspond to the material carried by the container” or “calculated as a difference in the surface of the material in container 128 from a reference surface (e.g., the container interior) that represents a known volume” is estimating a volume of material moved by the work tool based on a three-dimensional reconstruction (e.g., the “3D points that correspond to the material carried by the container” or “surface of the material in container”) using multiple images (e.g., “capture[d] images”) and a stored geometric model of the work tool (e.g., the 3D model “that correspond to the container” or the “reference surface” of the container)).

Since Hageman-653 discloses estimating a volume of material of a work tool can be applied to any type of earth-moving work vehicle including a scraper (e.g., a motor grader; para. [0034], recites [0034] “However, it is noted that examples described herein can be implemented using any type of earth-moving vehicle or machine, such as an excavator or scraper. These illustrative examples are given to introduce the reader to the general subject matter discussed here and are not intended to limit the scope of the disclosed concepts. …” ) such as the motor grader taught by Ono-085 (Fig. 8—see the rejection of claim 1 above), it would have been obvious to one of ordinary skill in the art before the effective filing date of the presently filed invention to modify the system of Yamashita, as modified by Tsuji and Ono-085, to incorporate obtaining the three-dimensional reconstruction of the volume of material moved by the work tool using the combined first and second images and a stored geometric model of the work tool to improve the productivity of material moved by the work vehicle as taught by Hageman-653 (para. [0045], recites [0045] “Further, the productivity of material moved by a work vehicle or a group of work vehicles at a work site can be monitored. Volume sensors can be used to improve volume estimation accuracy and ease. Costs and inconvenience can be reduced, as compared to other volume estimation solutions, such as position sensors and weighting sensors.” ).

Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Yamashita, as modified by Tsuji and Ono-085, as applied to claim 1 above, and further in view of Nishi (US 2024/0018750 A1).

Regarding claim 23, Yamashita, as modified by Tsuji and Ono-085, discloses the utility vehicle of claim 1, wherein Nishi teaches in the same field of endeavor of material moving utility vehicles the modified image further displays color indicators based on a level of load experienced by the utility vehicle (para(s).
[0287-0288], recite(s) [0287] “For example, in a case where the weight 43e is larger than the remaining load amount 43b, that is, in a case where dumping earth and sand loaded onto the bucket 6 results in overloading, the display control part 66 of the present embodiment may change the display mode for the load amount image 43a
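The volume calculation that Hageman-653 para. [0057] describes (the volume "calculated as a difference in the surface of the material in container 128 from a reference surface … that represents a known volume") can be reduced to a short height-map sketch. This is an illustrative reduction only, not the reference's implementation: the function name, the uniform-grid height-map representation, and the `cell_area` parameter are assumptions made for the sketch, whereas the reference works from 3D sensor point clouds and a container model.

```python
import numpy as np

def material_volume(material_z, reference_z, cell_area):
    """Estimate carried-material volume as the summed positive height
    difference between the sensed material surface and a reference
    (empty-container) surface sampled on the same grid."""
    material_z = np.asarray(material_z, dtype=float)
    reference_z = np.asarray(reference_z, dtype=float)
    # Cells at or below the reference surface hold no material.
    diff = np.clip(material_z - reference_z, 0.0, None)
    return float(diff.sum() * cell_area)
```

For example, a uniform 0.5 m layer of material over eight cells of 0.25 m² each yields 8 × 0.5 × 0.25 = 1.0 m³; cells sensed below the reference surface contribute nothing.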

Prosecution Timeline

Nov 18, 2021
Application Filed
Dec 28, 2023
Non-Final Rejection — §103, §112
Apr 05, 2024
Response Filed
May 13, 2024
Final Rejection — §103, §112
Aug 19, 2024
Request for Continued Examination
Aug 26, 2024
Response after Non-Final Action
Nov 21, 2024
Non-Final Rejection — §103, §112
Mar 27, 2025
Response Filed
May 19, 2025
Final Rejection — §103, §112
Aug 25, 2025
Request for Continued Examination
Aug 29, 2025
Response after Non-Final Action
Sep 29, 2025
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597169
ACTIVITY PREDICTION USING PORTABLE MULTISPECTRAL LASER SPECKLE IMAGER
2y 5m to grant Granted Apr 07, 2026
Patent 12586219
Fast Kinematic Construct Method for Characterizing Anthropogenic Space Objects
2y 5m to grant Granted Mar 24, 2026
Patent 12579638
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM FOR PERFORMING DETERMINATION REGARDING DIAGNOSIS OF LESION ON BASIS OF SYNTHESIZED TWO-DIMENSIONAL IMAGE AND PRIORITY TARGET REGION
2y 5m to grant Granted Mar 17, 2026
Patent 12562063
METHOD FOR DETECTING ROAD USERS
2y 5m to grant Granted Feb 24, 2026
Patent 12561805
METHODS AND SYSTEMS FOR GENERATING DUAL-ENERGY IMAGES FROM A SINGLE-ENERGY IMAGING SYSTEM BASED ON ANATOMICAL SEGMENTATION
2y 5m to grant Granted Feb 24, 2026
Based on the 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
68%
Grant Probability
99%
With Interview (+35.7%)
3y 4m
Median Time to Grant
High
PTA Risk
Based on 69 resolved cases by this examiner. Grant probability derived from career allow rate.
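The projected figures above can be reproduced with simple arithmetic: the base grant probability is the career allow rate (47 granted of 69 resolved, about 68%), and the with-interview figure adds the +35.7% interview lift. A minimal sketch follows, assuming the with-interview probability is the base rate plus the lift capped at 99%; the cap and the helper name `grant_projection` are inferences from the displayed numbers, not a documented methodology.

```python
def grant_projection(granted, resolved, interview_lift_pct, cap_pct=99.0):
    """Career allow rate and interview-adjusted grant probability, in percent."""
    base = 100.0 * granted / resolved                   # e.g., 47/69 -> 68.1%
    boosted = min(base + interview_lift_pct, cap_pct)   # additive lift, capped
    return round(base), round(boosted)

# 47 grants out of 69 resolved cases, +35.7% interview lift
print(grant_projection(47, 69, 35.7))  # → (68, 99)
```

Note the additive lift overshoots 100% here (68.1 + 35.7 = 103.8), which is why a cap of some kind must be in play for the displayed 99%.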
