Prosecution Insights
Last updated: April 19, 2026
Application No. 18/035,433

ARTIFICIAL INTELLIGENCE FUNCTIONALITY DEPLOYMENT SYSTEM AND METHOD AND SYSTEM AND METHOD USING SAME

Final Rejection (§103, §112)
Filed: May 04, 2023
Examiner: CROCKETT, JOSHUA BRIGHAM
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: Pleora Technologies Inc.
OA Round: 2 (Final)
Grant Probability: 72% (Favorable)
OA Rounds: 3-4
To Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% (13 granted / 18 resolved; +10.2% vs TC avg). Grants above average.
Interview Lift: +27.5% (allowance rate with vs. without interview, over resolved cases with an interview)
Typical Timeline: 3y 0m avg prosecution; 26 applications currently pending
Career History: 44 total applications across all art units
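The headline figures in this panel are simple cohort arithmetic. A minimal sketch of how they can be reproduced from the reported counts (variable names are illustrative, and the two interview-cohort rates are assumed for demonstration, since the panel reports only the +27.5% delta):

```python
# Sketch of the arithmetic behind the examiner panel above.
# Counts come from the panel; the interview cohort rates are
# hypothetical, chosen only so their difference matches the panel.

granted = 13                      # career grants
resolved = 18                     # resolved cases (grants + abandonments)

allow_rate = granted / resolved   # career allow rate
tc_avg = allow_rate - 0.102       # implied TC average (panel: +10.2% vs TC avg)

# Assumed allowance rates with and without an examiner interview.
rate_with, rate_without = 0.85, 0.575
lift = rate_with - rate_without   # interview lift

print(f"Career allow rate: {allow_rate:.0%}")        # 72%
print(f"vs TC average:     {allow_rate - tc_avg:+.1%}")
print(f"Interview lift:    {lift:+.1%}")             # +27.5%
```

The point of the sketch is only that each panel figure is a ratio or a difference of ratios over the examiner's resolved cases, not the output of any proprietary model.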

Statute-Specific Performance

§101: 6.0% (-34.0% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§103: 47.5% (+7.5% vs TC avg)
§112: 35.1% (-4.9% vs TC avg)
TC average values are estimates • Based on career data from 18 resolved cases

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in Application No. 18/035,433 (the instant application), filed on 05/04/2023.

Response to Arguments

Claims 1, 4, 10, 12, 20, 23, 28, 29, 32, 33, 37, and 39 are amended. Claims 3, 15, 22, and 34 are canceled. Claims 1, 4, 5, 9-10, 12-14, 18, 20, 23, 24, 28-29, 31-33, 37, and 39 are pending in this action.

Applicant's arguments, see pg. 10, filed 27 January 2026, with respect to the objections to the drawings have been fully considered and are persuasive. Specifically, the applicant corrected the errors causing the objections. The objections to the drawings have been withdrawn.

Applicant's arguments, see pg. 10, filed 27 January 2026, with respect to the objections to the specification have been fully considered and are persuasive. Specifically, the applicant corrected the errors causing the objections. The objections to the specification have been withdrawn.

Applicant's arguments, see pg. 11, filed 27 January 2026, with respect to the objections to claims 37 and 39 have been fully considered and are persuasive. Specifically, the applicant corrected the errors causing the objections. The objections to claims 37 and 39 have been withdrawn.

Applicant's arguments, see pg. 11-12, filed 27 January 2026, with respect to the rejection of claims 4, 10, 12, 15, 22-23, 28-29, 32-34, and 39 under 35 U.S.C. 112(b) have been fully considered and are persuasive. Specifically, the applicant corrected the errors causing the rejections. The rejection of claims 4, 10, 12, 15, 22-23, 28-29, 32-34, and 39 under 35 U.S.C. 112(b) has been withdrawn.

Applicant's arguments, see pg. 12-14, filed 27 January 2026, with respect to the rejection of claims 1, 3-5, 9-10, 12-15, 18, 20, 22-24, 28-29, 31-34, 37 and 39 under 35 U.S.C. 103 have been fully considered and are not persuasive.

In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies are not recited in the rejected claims. For example: "identifies features from the substrate that are not otherwise identifiable by the acquisition device (e.g. the camera) OR displayable by the consumption device (e.g. a display monitor)" (pg. 12, last para.): the claims do not contain language about "displayable". "Uses acquisition and consumption devices having communications formats that are not necessarily compatible with such protocols" (pg. 14, para. 7): the claims are not explicit that the acquisition and consumption devices cannot be compatible with ethernet; if it is a "not necessarily compatible" limitation, then it is not limited to only devices which are not compatible (devices which are compatible are within the broadest reasonable interpretation). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).

With regard to the newly amended language in claim 1, Park et al. (US 20210158561 A1; hereafter, Park) discloses:

and encode said raw machine vision digital data signal for communication via ethernet-based communication protocols ([0567] and Fig. 40A, an acquisition device, in this example an ultrasound, may send data, raw data as a format option, to system 3800. [0533]-[0534] and Fig. 38, "In at least one embodiment, communication between facilities and components of system 3800 . . . may be communicated over a data bus or data busses, wireless data protocols (Wi-Fi), wired data protocols (e.g., Ethernet), etc." As the data is sent over ethernet communication, a person of ordinary skill in the art would understand that the data would necessarily be encoded in a way compatible with ethernet communication protocols, otherwise it could not be sent over ethernet);

send said encoded raw machine vision digital data signal via ethernet-based communication protocols over network connection ([0567] and Fig. 40A, an acquisition device, in this example an ultrasound, may send data, raw data as a format option, to system 3800. [0533]-[0534] and Fig. 38, "In at least one embodiment, communication between facilities and components of system 3800 . . . may be communicated over a data bus or data busses, wireless data protocols (Wi-Fi), wired data protocols (e.g., Ethernet), etc." Therefore, the data may be sent by ethernet from the acquisition device);

receive a transcoded machine vision digital data signal via ethernet-based communication protocols over network connection (transcoded data is understood as data with features marked for recognition, i.e. rendering them detectable; see applicant's specification [0044]. Park, [0568] and Fig. 40A, the data has a detection operation 4008 performed on it which may use an inference engine 4016 of services 3720. The detection of features is understood as transcoding as it marks features. [0533]-[0534] and Fig. 38, "In at least one embodiment, communication between facilities and components of system 3800 . . . may be communicated over a data bus or data busses, wireless data protocols (Wi-Fi), wired data protocols (e.g., Ethernet), etc." The data communication between 3810B and 3720 in Fig. 40A is done by ethernet. Therefore, the data marked by inference engine 4016, i.e. the transcoded data, is received by ethernet at 3810B);

a digital data processor communicatively linked to said digital data interface bus via network connection ([0533]-[0534] and Fig. 38, the components of system 3800 may communicate by data busses, wireless communication, i.e. network connection, and/or ethernet. Therefore, GPUs 3822, which are processors, may be linked to an interface bus via a network connection to the services 3720 which are used in the above communications of Fig. 40A);

and said digital data processor operable to: receive an ethernet-based communication signal comprising the raw machine vision data via network connection ([0567] and Fig. 40A, raw data is received by system 3800. [0533]-[0534] and Fig. 38, "In at least one embodiment, communication between facilities and components of system 3800 . . . may be communicated over a data bus or data busses, wireless data protocols (Wi-Fi), wired data protocols (e.g., Ethernet), etc." Therefore, the data may be sent by ethernet from the acquisition device and received by the system, including a processor 3822 of Fig. 38);

and send an ethernet-based communication signal comprising said transcoded machine vision digital data signal to said digital data interface bus (transcoded data is understood as data with features marked for recognition, i.e. rendering them detectable; see applicant's specification [0044]. Park, [0568] and Fig. 40A, the data has a detection operation 4008 performed on it which may use an inference engine 4016 of services 3720. The detection of features is understood as transcoding as it marks features. [0533]-[0534] and Fig. 38, "In at least one embodiment, communication between facilities and components of system 3800 . . . may be communicated over a data bus or data busses, wireless data protocols (Wi-Fi), wired data protocols (e.g., Ethernet), etc." Therefore, the data communication between 3810B and 3720 in Fig. 40A is done by ethernet, and the data marked by inference engine 4016, i.e. the transcoded data, is sent by ethernet to 3810B. Further, per [0156], Ethernet busses may be item 1102 in Fig. 11C, which is a bus connecting all of the different components including processors, sensors, display, storage, etc. Therefore, it is understood that data is sent by ethernet to a digital interface bus).

Claims 20 and 39 are amended similarly to claim 1 and are similarly taught by Park. Therefore, the applicant's arguments are not persuasive and the claims remain rejected under 35 U.S.C. 103.

Claim Objections

Claim 20 is objected to because of the following informalities:
Lines 8-9: "said raw machine vision digital data signal" should read "said raw data media signal"
Line 11: "said encoded raw machine vision digital data signal" should read "said encoded raw data media signal"
Lines 23-24: "digital data interface" should read "digital data interface bus"
Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 20, 23, 24, 28, 29, 31-33, 37, and 39 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claim 20, in line 8 it is unclear whether "said media data signal" refers to the "media data" acquired in line 6 or to the corresponding "raw data media signal" output in lines 6-7. Therefore, the bounds of the claim are unclear and indefinite. For the purpose of examination, the examiner interprets the "said media data signal" of line 8 as the output "raw data media signal" of lines 6-7.
Claims 23, 24, 28, 29, 31-33, and 37 depend on claim 20 and are rejected for failing to remedy the ambiguity of claim 20.

Regarding claim 39, lines 13-14 state "wherein said artifact is not detectable by a consumption device nor by an acquisition device;". It is unclear how transcoding data acquired by an acquisition device may render detectable an artifact which is not detectable by the acquisition device. For example, suppose a camera may detect light in the visible spectrum, and an artifact present on an object is detectable only in the infrared or ultraviolet light spectrums. The artifact would not be detectable by the camera. Suppose the camera captures an image of the object with the artifact. Because the artifact is not detectable by the camera, no data captured by the camera would contain information related to the artifact. As no information pertaining to the artifact is present in the data, no amount of transcoding may make the artifact detectable; the artifact would simply not be present in the data. Therefore, it is unclear how an artifact which is undetectable by an acquisition device may be made detectable in an image captured by the acquisition device. For the purpose of examination, the examiner will interpret the claim as using the word "identifiable", as in the amendment to claim 1, rather than "detectable".

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4, 5, 9-10, 12-14, 18, 20, 23, 24, 28-29, 31-33, 37 and 39 are rejected under 35 U.S.C. 103 as being unpatentable over Prabuwono et al. ("Development of Intelligent Visual Inspection System (IVIS) for Bottling Machine", full reference on PTO-892 included in this action; hereafter, Prabuwono) in view of Chaton (US 20200364842 A1) in further view of Park et al. (US 20210158561 A1; hereafter, Park).

Regarding claim 1, Prabuwono discloses:

capturing sensed information from a substrate (pg. 1 col. 2 para. 1 and Fig. 1, the system captures images of a system of bottles on a conveyor belt, which can be considered a substrate);

the system comprising: a digital data interface bus that is operable to: (pg. 2 col. 1 para. 1 and Fig. 2, the system comprises a software framework with digital data being shared between various components, which is understood as a digital data interface bus);

receive said raw machine vision digital data signal from an acquisition device (pg. 2 col. 1 para. 2, the process begins with receiving vision data from an acquisition device), wherein said raw machine vision digital data signal is in an acquisition device format (a person of ordinary skill in the art would understand that after an image is acquired it is in the data format of the acquisition device);

output the transcoded machine vision digital data signal to a consumption device (per the applicant's specification [0093], the examiner interprets that the consumption device may be a monitor, i.e. a display. pg. 3 col. 2 para. 4, the system generates an output. In describing the output, Prabuwono says "From this, we can see the result of bottle inspection". This shows that the output is sent to a display in order to be seen, which is understood as a consumption device), wherein said transcoded machine vision digital data signal is in a consumption device format (pg. 3 col. 2 para. 4, the output is a file consisting of the information of the processing. As this file is "seen" as explained above, the file is understood to be in a consumption device format);

a digital data processor (Fig. 2, a computer host is part of the system. Further, a person of ordinary skill in the art would understand that a computer vision process as described in Prabuwono would be performed on a processor) communicatively linked to said digital data interface bus (Fig. 2, the computer is communicating with other parts of the system, which is understood as being linked to the digital data interface bus),

and said digital data processor operable to: identify digital data elements associated with an artifact of said substrate (pg. 3 col. 1 para. 1-2, the system detects an incorrect cap placement. As this is an issue in the cap, it is understood as an artifact of the substrate. Identifying the heights of the caps is understood as identifying digital data elements) in said raw machine vision digital data signal (Fig. 2, the edge analyzing module of above is in the image processing subsystem. pg. 2 col. 2 para. 1-2, the image processing subsystem receives and analyzes the image data, which is understood as the raw machine vision digital data signal),

wherein said artifact is not identifiable in said raw machine vision digital data signal by said acquisition device (Fig. 2, the acquisition device is a web camera. A person of ordinary skill in the art would recognize a web camera to lack the capability to perform an "identifying" function. Therefore, as the camera does not perform an identifying function, the artifact is not identifiable by the camera);

transcode the raw machine vision digital data signal by rendering the artifact detectable (pg. 3 col. 2 para. 4, the output indicates acceptance or rejection of the cap position, i.e. the artifact, which is understood as making the artifact detectable) in the transcoded machine vision digital data signal (pg. 3 col. 2 para. 4, the indication of the acceptability, i.e. the detectable artifact, is in the output, which is understood as the transcoded machine vision digital data signal) by said consumption device (pg. 3 col. 2 para. 4, the user can see the result of the inspection, which is understood as the artifact being now detectable on the consumption device).

Prabuwono does not disclose expressly a deployment system for transcoding a signal in an existing machine vision system and that the artifact is not detectable by the consumption device.
Chaton discloses: A machine vision functionality deployment system for transcoding a raw machine vision data signal in an existing machine vision system ([0013] the invention performs functions which may be understood as transcoding, which provide functionality and may be applied to various existing systems); wherein said artifact is not detectable by said consumption device ([0106] the system includes one or more monitors, which is understood as a consumption device. [0080] defects are highlighted and displayed. This shows that the defects were not detectable by the monitor prior to highlighting the defect).

Prabuwono and Chaton are combinable because they are from the same field of endeavor of product inspection in a manufacturing environment (Prabuwono, Fig. 1, the examiner interprets the conveyor belt and plastic bottles as a manufacturing environment; Chaton, [0061]). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to make the not-detectable artifact visible as taught by Chaton. The motivation for doing so would have been "which defect can be output via display of a mask which allows an observer to easily identify the location of a defect on the material surface" (Chaton, [0080]). Therefore, it would have been obvious to combine Chaton with Prabuwono.

Prabuwono in view of Chaton does not disclose expressly to encode and send raw machine vision digital data via ethernet-based communication protocols, receive a transcoded machine vision digital data signal via ethernet-based communication protocols, a processor linked to a bus via network connection, receiving an ethernet-based communication signal of the raw machine vision data via network connection, and to send an ethernet-based communication signal comprising the transcoded machine vision digital data signal to the digital interface bus.
Park discloses: and encode said raw machine vision digital data signal for communication via ethernet-based communication protocols ([0567] and Fig. 40A, an acquisition device, in this example an ultrasound, may send data, raw data as a format option, to system 3800. [0533]-[0534] and Fig. 38, "In at least one embodiment, communication between facilities and components of system 3800 . . . may be communicated over a data bus or data busses, wireless data protocols (Wi-Fi), wired data protocols (e.g., Ethernet), etc." As the data is sent over ethernet communication, a person of ordinary skill in the art would understand that the data would necessarily be encoded in a way compatible with ethernet communication protocols, otherwise it could not be sent over ethernet);

send said encoded raw machine vision digital data signal via ethernet-based communication protocols over network connection ([0567] and Fig. 40A, an acquisition device, in this example an ultrasound, may send raw data to system 3800. [0533]-[0534] and Fig. 38, "In at least one embodiment, communication between facilities and components of system 3800 . . . may be communicated over a data bus or data busses, wireless data protocols (Wi-Fi), wired data protocols (e.g., Ethernet), etc." Therefore, the data may be sent by ethernet from the acquisition device);

receive a transcoded machine vision digital data signal via ethernet-based communication protocols over network connection (transcoded data is understood as data with features marked for recognition, i.e. rendering them detectable; see applicant's specification [0044]. Park, [0568] and Fig. 40A, the data has a detection operation 4008 performed on it which may use an inference engine 4016 of services 3720. The detection of features is understood as transcoding as it marks features. [0533]-[0534] and Fig. 38, "In at least one embodiment, communication between facilities and components of system 3800 . . . may be communicated over a data bus or data busses, wireless data protocols (Wi-Fi), wired data protocols (e.g., Ethernet), etc." Therefore, the data communication between 3810B and 3720 in Fig. 40A is done by ethernet, and the data marked by inference engine 4016, i.e. the transcoded data, is received by ethernet at 3810B);

a digital data processor communicatively linked to said digital data interface bus via network connection ([0533]-[0534] and Fig. 38, the components of system 3800 may communicate by data busses, wireless communication, i.e. network connection, and/or ethernet. Therefore, GPUs 3822, which are processors, may be linked to an interface bus via a network connection to the services 3720 which are used in the above communications of Fig. 40A);

and said digital data processor operable to: receive an ethernet-based communication signal comprising the raw machine vision data via network connection ([0567] and Fig. 40A, raw data is received by system 3800. [0533]-[0534] and Fig. 38, "In at least one embodiment, communication between facilities and components of system 3800 . . . may be communicated over a data bus or data busses, wireless data protocols (Wi-Fi), wired data protocols (e.g., Ethernet), etc." Therefore, the data may be sent by ethernet from the acquisition device and received by the system, including a processor 3822 of Fig. 38);

and send an ethernet-based communication signal comprising said transcoded machine vision digital data signal to said digital data interface bus (transcoded data is understood as data with features marked for recognition, i.e. rendering them detectable; see applicant's specification [0044]. Park, [0568] and Fig. 40A, the data has a detection operation 4008 performed on it which may use an inference engine 4016 of services 3720. The detection of features is understood as transcoding as it marks features. [0533]-[0534] and Fig. 38, "In at least one embodiment, communication between facilities and components of system 3800 . . . may be communicated over a data bus or data busses, wireless data protocols (Wi-Fi), wired data protocols (e.g., Ethernet), etc." Therefore, the data communication between 3810B and 3720 in Fig. 40A is done by ethernet, and the data marked by inference engine 4016, i.e. the transcoded data, is sent by ethernet to 3810B. Further, per [0156], Ethernet busses may be item 1102 in Fig. 11C, which is a bus connecting all of the different components including processors, sensors, display, storage, etc. Therefore, it is understood that data is sent by ethernet to a digital interface bus).

Park is combinable with Prabuwono in view of Chaton because it is in the related field of endeavor of object pose estimation (Park, [0002]). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the ethernet-based communications of Park with the invention of Prabuwono in view of Chaton. The motivation for doing so would have been "In at least one embodiment, on-premise installation may allow for high-bandwidth uses (via, for example, higher throughput local communication interfaces, such as RF over Ethernet) for real-time processing" (Park, [0563]). Thus, ethernet communication may allow for increased ability to perform real-time processing. Further, the use of the ethernet communication represents a simple substitution of one known element, the digital communication of Prabuwono in view of Chaton, for another known element, the ethernet communication of Park, to yield predictable results, namely efficient communication between components of the system. Therefore, it would have been obvious to combine Park with Prabuwono in view of Chaton to obtain the invention as specified in claim 1.

Regarding claim 4, Prabuwono in view of Chaton in further view of Park discloses the subject matter of claim 1.
Prabuwono in view of Chaton does not disclose expressly that the communication protocol is compatible with ethernet-based communication protocols of one of GigE VisionTM, USB3 VisionTM, Camera LinkTM, MIPITM, or GenICamTM.

Park discloses: The system of claim 1, wherein said ethernet-based communication protocol is compatible with ethernet-based communication protocols of one or more of: GigE VisionTM, USB3 VisionTM, Camera LinkTM, MIPITM, or GenICamTM ([0190] the communication may include MIPI protocols). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to use the ethernet-based and compatible communication of Park with the invention of Prabuwono in view of Chaton. The motivation for doing so would have been "for receiving video and input from cameras, a high-speed interface, and/or a video input block that may be used for a camera and related pixel input functions" (Park, [0190]). Therefore, it would have been obvious to combine Park with Prabuwono in view of Chaton to obtain the invention as specified in claim 4.

Regarding claim 5, Prabuwono in view of Chaton in further view of Park discloses the subject matter of claim 1. Prabuwono further discloses: The system of claim 1, wherein said rendering comprises at least one of adding, removing, or updating one or more digital data elements relating to said artifact (pg. 3 col. 2 para. 4, the output includes an indication of the result. The indication may be understood as adding one or more digital data elements, e.g. a label or indication, relating to the artifact).

Regarding claim 9, Prabuwono in view of Chaton in further view of Park discloses the subject matter of claim 1. Prabuwono does not disclose expressly that the rendering comprises adding digital data to render non-visual features into visual features.

Chaton discloses: The system of claim 1, wherein said rendering comprises adding digital data to render non-visual features on said substrate into visual features ([0080] defects are highlighted and displayed. The highlighting is understood as adding digital data to render non-visual features into visual features) in said transcoded machine vision digital data signal ([0080] the highlighted defects are displayed, which is understood as being in the transcoded digital data signal). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the adding of digital data taught by Chaton with the invention of Prabuwono. The motivation for doing so would have been that it "allows to identify the location of a defect" (Chaton, [0080]). Therefore, it would have been obvious to combine Chaton with Prabuwono to obtain the invention as specified in claim 9.

Regarding claim 10, Prabuwono in view of Chaton in further view of Park discloses the subject matter of claim 9. Prabuwono does not disclose expressly that the rendering comprises adding visual elements to the features.

Chaton discloses: The system of claim 9, wherein said rendering comprises adding visual elements corresponding to said non-visual features ([0080] defects are highlighted and displayed. The highlighting is understood as adding visual elements to the defects which were previously non-visual). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the adding of digital data taught by Chaton with the invention of Prabuwono. The motivation for doing so would have been that it "allows to identify the location of a defect" (Chaton, [0080]). Therefore, it would have been obvious to combine Chaton with Prabuwono to obtain the invention as specified in claim 10.
Regarding claim 12, Prabuwono in view of Chaton in further view of Park discloses the subject matter of claim 5. Prabuwono further discloses: The system of claim 5, wherein said rendering comprises tagging feature-identifying data with the one or more of the digital data elements (pg. 3 col. 2 para. 4, the output includes an indication of the result. The indication may be understood as tagging feature-identifying data with one or more digital data elements, e.g. a label or indication, relating to the artifact).

Regarding claim 13, Prabuwono in view of Chaton in further view of Park discloses the subject matter of claim 1. Prabuwono in view of Chaton does not disclose expressly that the rendering comprises combining the data from another acquisition device.

Park discloses: The system of claim 1, wherein said rendering comprises combining said raw machine vision digital data signal with a further machine vision digital data signal from a further acquisition device ([0214] the acquisition device may be a stereo camera, which is comprised of two lenses and combines the incoming data). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the data from another acquisition device as taught by Park. The motivation for doing so would have been that "cameras may be used to capture image data around an entire periphery of vehicle" (Park, [0214]). Therefore, it would have been obvious to combine Park with Prabuwono in view of Chaton to obtain the invention as specified in claim 13.

Regarding claim 14, Prabuwono in view of Chaton in further view of Park discloses the subject matter of claim 1. Prabuwono in view of Chaton does not disclose expressly that the data comprises non-image data.
Park discloses: The system of claim 1, wherein the raw machine vision digital data signal comprises, at least in part, non-image data ([0204] the vision data may be radar which is understood as non-image data). It would have been It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to combine the non-image data taught by Park with the invention of Prabuwono in view of Chaton. The motivation for doing so would have been "for long-range vehicle detection, even in darkness and/or severe weather conditions" (Park, [0204]). Therefore, it would have been obvious to combine Park with Prabuwono in view of Chaton to obtain the invention as specified in claim 14. Regarding claim 18, Prabuwono in view of Chaton in further view of Park discloses the subject matter of claim 1. Prabuwono in view of Chaton does not disclose expressly that the processor is further configured to send control signals for controlling functionality of said acquisition device. Park discloses: The system of claim 1, wherein said digital data processor is further configured to send control signals for controlling functionality of said acquisition device ([0150] a control unit is associated with the acquisition device for control of the system). It would have been It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to use the control signal capability of Park in the invention of Prabuwono in view of Chaton. The motivation for doing so would have been "to activate autonomous emergency braking and lane departure warning functions" (Park [0150]). Therefore, it would have been obvious to combine Park with Prabuwono in view of Chaton to obtain the invention as specified in claim 18. Regarding claim 20, Prabuwono discloses: A computer-implemented functionality deployment method, automatically implemented by one or more digital processors (Fig. 
2, a computer host is part of the system. Further, a person of ordinary skill in the art would understand that a computer vision process as described in Prabuwono would be performed on a processor), for deploying functionality on a media acquisition and presentation infrastructure for automated analysis of a substrate (pg. 1 col. 2 para. 1 and Fig. 1, the system captures images of a system of bottles on a conveyor belt which can be considered a substrate), the method comprising: interfacing, using a digital data interface bus, with at least one media acquisition device that acquires media data (pg. 2 col. 1 para. 1 and Fig. 2, the system comprises a software framework with digital data being shared between various components which is understood as interfacing by a digital data bus. A camera, understood as an acquisition device, is interfaced this way) and outputs a corresponding raw data media signal to a media consumption device (per the applicant specification [0093], the examiner interprets that the consumption device may be a monitor, i.e. a display. pg. 3 col. 2 para. 4, the system generates an output. In describing the output, Prabuwono says "From this, we can see the result of bottle inspection". This shows that the output is sent to a display in order to be seen which is understood as a consumption device); intercepting said media data signal (pg. 2 col. 1 para. 1 and Fig. 2, the IVIS system receives the data and operates between the acquisition device and the output which may be understood as intercepting the data); identifying digital data elements associated with an artifact of said substrate in said raw data media signal (pg. 3 col. 1 para.
1-2, the system detects an incorrect cap placement. As this is an issue in the cap, it is understood as an artifact of the substrate. Identifying the heights of the cap is understood as identifying digital data elements), applying one or more data transformation function to produce a corresponding transformed media data (pg. 2 col. 2 para. 5, noise is removed from the image which is understood as a transformation function. The examiner notes that the claim does not state which data is being transformed and so the examiner interprets it as any data acquired by the acquisition device); transcoding said raw data media signal into a transcoded media data signal by rendering said artifact detectable (pg. 3 col. 2 para. 4, the output indicates acceptance or rejection of the cap position, i.e. the artifact, which is understood as making the artifact detectable) in said transcoded media data signal (pg. 3 col. 2 para. 4, the indication of the acceptability, i.e. the detectable artifact, is in the output which is understood as the transcoded machine vision digital data signal) by said media consumption device (pg. 3 col. 2 para. 4, the user can see the result of the inspection which is understood as the artifact being now detectable on the consumption device); and transmitting said transcoded media data signal, via said digital interface bus, so that it is received by the media consumption device (pg. 3 col. 2 para. 4, the output is provided an interface to interact with other software which is understood as transmitting. pg. 2 col. 1 para. 1 and Fig. 2, the system comprises a software framework with digital data being shared between various components which is understood as interfacing by a digital data bus) in place of said raw media data signal (pg. 3 col. 2 para. 4 and Fig.
2, the output is the result of the processing rather than that output being the raw image from the camera). Prabuwono does not disclose expressly that the artifact is not detectable by the consumption device. Chaton discloses: wherein said artifact is not detectable in said raw data media signal by said media consumption device ([0106] the system includes one or more monitors which is understood as a consumption device. [0080] defects are highlighted and displayed. This shows that the defects were not detectable by the monitor prior to highlighting the defect); It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to make the not detectable artifact visible as taught by Chaton. The motivation for doing so would have been "which defect can be output via display of a mask which allows an observer to easily identify the location of a defect on the material surface" (Chaton, [0080]). Therefore, it would have been obvious to combine Chaton with Prabuwono.

Prabuwono in view of Chaton does not disclose expressly to encode and send raw machine vision digital data via ethernet-based communication protocols, and to send an ethernet-based communication signal comprising the transcoded machine vision digital data signal to the digital interface bus. Park discloses: and encoding said raw machine vision digital data signal for communication via ethernet-based communication protocols ([0567] and Fig. 40A, an acquisition device, in this example an ultrasound, may send data, raw data as a format option, to system 3800. [0533]-[0534] and Fig. 38, "In at least one embodiment, communication between facilities and components of system 3800 . . . may be communicated over a data bus or data busses, wireless data protocols (Wi-Fi), wired data protocols (e.g., Ethernet), etc."
As the data is sent over ethernet communication, a person of ordinary skill in the art would understand that the data would necessarily be encoded in a way compatible with ethernet communication protocols, otherwise it could not be sent over ethernet); sending said encoded raw machine vision digital data signal via ethernet-based communication protocols over network connection to the one or more digital data processors ([0567] and Fig. 40A, an acquisition device, in this example an ultrasound, may send data, raw data as a format option, to system 3800. [0533]-[0534] and Fig. 38, "In at least one embodiment, communication between facilities and components of system 3800 . . . may be communicated over a data bus or data busses, wireless data protocols (Wi-Fi), wired data protocols (e.g., Ethernet), etc." Therefore, the data may be sent by ethernet from the acquisition device. Fig. 38, system 3800 includes processors 3822; therefore, the communication is to at least one or more processors); and sending an ethernet-based communication signal comprising said transcoded machine vision digital data signal to said digital data interface bus (Transcoded data is understood as data with features marked for recognition, i.e. rendering them detectable, see applicant's specification [0044]. Park, [0568] and Fig. 40A, the data has a detection operation 4008 performed on it which may use an inference engine 4016 of services 3720. The detection of features is understood as transcoding as it marks features. [0533]-[0534] and Fig. 38, "In at least one embodiment, communication between facilities and components of system 3800 . . . may be communicated over a data bus or data busses, wireless data protocols (Wi-Fi), wired data protocols (e.g., Ethernet), etc." Therefore, the data communication between 3810B and 3720 in Fig. 40A is done by ethernet. Therefore, the data marked by inference engine 4016, transcoded data, is sent by ethernet to 3810B.
Further, [0156] Ethernet busses may be item 1102 in Fig. 11C which is a bus connecting all of the different components including processors, sensors, display, storage, etc. Therefore, it is understood that data is sent by ethernet to a digital interface bus). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the ethernet-based communications of Park with the invention of Prabuwono in view of Chaton. The motivation for doing so would have been "In at least one embodiment, on-premise installation may allow for high-bandwidth uses (via, for example, higher throughput local communication interfaces, such as RF over Ethernet) for real-time processing" (Park, [0563]). Thus, ethernet communication may allow for increased ability to perform real-time processing. Further, the use of the ethernet communication represents a simple substitution of one known element, digital communication of Prabuwono in view of Chaton, for another known element, ethernet communication of Park, to yield predictable results, efficient communication between components of the system. Therefore, it would have been obvious to combine Park with Prabuwono in view of Chaton to obtain the invention as specified in claim 20.

Regarding claim 23, Prabuwono in view of Chaton in further view of Park discloses the subject matter of claim 20. Prabuwono in view of Chaton does not disclose expressly that the communication protocol is compatible with ethernet-based communication protocols of one of GigE Vision™, USB3 Vision™, Camera Link™, MIPI™, or GenICam™. Park discloses: The method of claim 20, wherein said ethernet-based communication protocol is compatible with ethernet-based communication protocols of one or more of: GigE Vision™, USB3 Vision™, Camera Link™, MIPI™, or GenICam™ ([0190] the communication may include MIPI protocols).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to use the ethernet-based and compatible communication of Park with the invention of Prabuwono in view of Chaton. The motivation for doing so would have been "for receiving video and input from cameras, a high-speed interface, and/or a video input block that may be used for a camera and related pixel input functions" (Park, [0190]). Therefore, it would have been obvious to combine Park with Prabuwono in view of Chaton to obtain the invention as specified in claim 23.

Regarding claim 24, Prabuwono in view of Chaton in further view of Park discloses the subject matter of claim 20. Prabuwono further discloses: The method of claim 20, wherein said rendering comprises at least one of adding, removing, or updating one or more digital data elements relating to said artifact (pg. 3 col. 2 para. 4, the output includes an indication of the result. The indication may be understood as adding one or more digital data elements, e.g. a label or indication, relating to the artifact).

Regarding claim 28, Prabuwono in view of Chaton in further view of Park discloses the subject matter of claim 20. Prabuwono does not disclose expressly that the rendering comprises adding digital data to render non-visual features into visual features. Chaton discloses: The method of claim 20, wherein said rendering comprises adding digital data to render non-visual features on said substrate into visual features ([0080] defects are highlighted and displayed. The highlighting is understood as adding digital data to render non-visual features into visual features) in said transcoded machine vision digital data signal ([0080] the highlighted defects are displayed which is understood as being in the transcoded digital data signal).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the adding of digital data taught by Chaton to the invention of Prabuwono. The motivation for doing so would have been that it "allows to identify the location of a defect," (Chaton, [0080]). Therefore, it would have been obvious to combine Chaton with Prabuwono to obtain the invention as specified in claim 28.

Regarding claim 29, Prabuwono in view of Chaton in further view of Park discloses the subject matter of claim 28. Prabuwono does not disclose expressly that the rendering comprises adding visual elements to the features. Chaton discloses: The method of claim 28, wherein said rendering comprises adding visual elements corresponding to said non-visual features ([0080] defects are highlighted and displayed. The highlighting is understood as adding visual elements to the defects which were previously non-visual).

Regarding claim 31, Prabuwono in view of Chaton in further view of Park discloses the subject matter of claim 20. Prabuwono further discloses: The method of claim 20, wherein said rendering comprises tagging feature-identifying data with one or more digital data elements (pg. 3 col. 2 para. 4, the output includes an indication of the result. The indication may be understood as tagging feature-identifying data with one or more digital data elements, e.g. a label or indication, relating to the artifact).

Regarding claim 32, Prabuwono in view of Chaton in further view of Park discloses the subject matter of claim 20. Prabuwono in view of Chaton does not disclose expressly that the rendering comprises combining the data from another acquisition device.
Park discloses: The method of claim 20, wherein said rendering comprises combining said media data signal with a further machine vision digital data signal from a further acquisition device ([0214] the acquisition device may be a stereo camera which is comprised of two lenses and combines the incoming data). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the data from another acquisition device as taught by Park. The motivation for doing so would have been that "cameras may be used to capture image data around an entire periphery of vehicle" (Park, [0214]). Therefore, it would have been obvious to combine Park with Prabuwono in view of Chaton to obtain the invention as specified in claim 32.

Regarding claim 33, Prabuwono in view of Chaton in further view of Park discloses the subject matter of claim 20. Prabuwono in view of Chaton does not disclose expressly that the data comprises non-image data. Park discloses: The method of claim 20, wherein the raw media data signal comprises, at least in part, non-image data ([0204] the vision data may be radar which is understood as non-image data). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the non-image data taught by Park with the invention of Prabuwono in view of Chaton. The motivation for doing so would have been "for long-range vehicle detection, even in darkness and/or severe weather conditions" (Park, [0204]). Therefore, it would have been obvious to combine Park with Prabuwono in view of Chaton to obtain the invention as specified in claim 33.

Regarding claim 37, Prabuwono in view of Chaton in further view of Park discloses the subject matter of claim 20.
Prabuwono in view of Chaton does not disclose expressly that the processor is further configured to send control signals for controlling functionality of said acquisition device. Park discloses: The method of claim 20, wherein said one or more digital data processors are further configured to send control signals for controlling functionality of said media acquisition device ([0150] a control unit is associated with the acquisition device for control of the system). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to use the control signal capability of Park in the invention of Prabuwono in view of Chaton. The motivation for doing so would have been "to activate autonomous emergency braking and lane departure warning functions" (Park [0150]). Therefore, it would have been obvious to combine Park with Prabuwono in view of Chaton to obtain the invention as specified in claim 37.

Regarding claim 39, Prabuwono discloses: capturing visual information from a substrate (pg. 1 col. 2 para. 1 and Fig. 1, the system captures images of a system of bottles on a conveyor belt which can be considered a substrate) by: receiving a raw machine vision digital signal from an acquisition device (pg. 2 col. 1 para. 2, the process begins with receiving vision data from an acquisition device) wherein said raw machine vision digital data signal is in a machine vision acquisition device format (a person of ordinary skill in the art would understand that after an image is acquired it is in the data format of the acquisition device); identifying digital data elements associated with an artifact in said raw machine vision digital data signal (pg. 3 col. 1 para. 1-2, the system detects an incorrect cap placement.
As this is an issue in the cap, it is understood as an artifact of the substrate. Identifying the heights of the cap is understood as identifying digital data elements), wherein said artifact is not detectable by an acquisition device (the examiner is interpreting this limitation as “identifiable” instead of “detectable” per the rejection under 35 U.S.C. 112(b) above. Fig. 2, the acquisition device is a web camera. A person of ordinary skill in the art would recognize a web camera to lack the capability to perform an "identifying" function. Therefore, as the camera does not perform an identifying function then the artifact is not identifiable by the camera); transcoding the raw machine vision digital data signal by rendering the artifact detectable (pg. 3 col. 2 para. 4, the output indicates acceptance or rejection of the cap position, i.e. the artifact, which is understood as making the artifact detectable) in a transcoded machine vision digital data signal (pg. 3 col. 2 para. 4, the indication of the acceptability, i.e. the detectable artifact, is in the output which is understood as the transcoded machine vision digital data signal) by the consumption device (pg. 3 col. 2 para. 4, the user can see the result of the inspection which is understood as the artifact being now detectable on the consumption device); and outputting the transcoded machine vision digital signal to a consumption device (per the applicant specification [0093], the examiner interprets that the consumption device may be a monitor, i.e. a display. pg. 3 col. 2 para. 4, the system generates an output. In describing the output, Prabuwono says "From this, we can see the result of bottle inspection". This shows that the output is sent to a display in order to be seen which is understood as a consumption device), wherein said transcoded machine vision digital data signal is in a consumption device format (pg. 3 col. 2 para.
4, the output is a file consisting of the information of the processing. As this file is "seen" as explained above, the file is understood to be in a consumption device format). Prabuwono does not disclose expressly a non-transitory computer-readable medium comprising instructions which are executed by a processor and that the artifact is not detectable by the consumption device. Chaton discloses: A non-transitory computer-readable medium ([0107] a non-transitory computer-readable medium) comprising digital instructions ([0107] the memory stores instructions) to be implemented by one or more digital processors ([0108] the processor implements the instructions) for transcoding a machine vision data signal in an existing machine vision system ([0013] the invention performs functions which may be understood as transcoding which provide functionality and may be applied to various existing systems) wherein said artifact is not detectable by said machine vision consumption device ([0106] the system includes one or more monitors which is understood as a consumption device. [0080] defects are highlighted and displayed. This shows that the defects were not detectable by the monitor prior to highlighting the defect); It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to make the not detectable artifact visible as taught by Chaton. The motivation for doing so would have been "which defect can be output via display of a mask which allows an observer to easily identify the location of a defect on the material surface" (Chaton, [0080]). Therefore, it would have been obvious to combine Chaton with Prabuwono.
Prabuwono in view of Chaton does not disclose expressly to encode and send raw machine vision digital data via ethernet-based communication protocols, and to output the transcoded machine vision digital data signal in a format that is not in an ethernet-based connection protocol. Park discloses: and encoding said raw machine vision digital data signal for communication in an ethernet-based communication protocol ([0567] and Fig. 40A, an acquisition device, in this example an ultrasound, may send data, raw data as a format option, to system 3800. [0533]-[0534] and Fig. 38, "In at least one embodiment, communication between facilities and components of system 3800 . . . may be communicated over a data bus or data busses, wireless data protocols (Wi-Fi), wired data protocols (e.g., Ethernet), etc." As the data is sent over ethernet communication, a person of ordinary skill in the art would understand that the data would necessarily be encoded in a way compatible with ethernet communication protocols, otherwise it could not be sent over ethernet); sending said encoded raw machine vision digital data signal in an ethernet-based communication protocol via network connection to said one or more digital processors ([0567] and Fig. 40A, an acquisition device, in this example an ultrasound, may send data, raw data as a format option, to system 3800. [0533]-[0534] and Fig. 38, "In at least one embodiment, communication between facilities and components of system 3800 . . . may be communicated over a data bus or data busses, wireless data protocols (Wi-Fi), wired data protocols (e.g., Ethernet), etc." Therefore, the data may be sent by ethernet from the acquisition device. Fig.
38, system 3800 includes processors 3822; therefore, the communication is to at least one or more processors); wherein said transcoded machine vision digital data signal is in a consumption device format that is not in an ethernet-based connection protocol (Transcoded data is understood as data with features marked for recognition, i.e. rendering them detectable, see applicant's specification [0044]. Park, [0569] and Fig. 40A, after the detection, i.e. the transcoded data with features marked, the data is output for visualization 4012. [0566], the system 3800 may utilize services locally. Therefore, the transfer of the data to the visualization may be local which would not be in an ethernet-based connection protocol format). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the ethernet-based communications of Park with the invention of Prabuwono in view of Chaton. The motivation for doing so would have been "In at least one embodiment, on-premise installation may allow for high-bandwidth uses (via, for example, higher throughput local communication interfaces, such as RF over Ethernet) for real-time processing" (Park, [0563]). Thus, ethernet communication may allow for increased ability to perform real-time processing. Further, the use of the ethernet communication represents a simple substitution of one known element, digital communication of Prabuwono in view of Chaton, for another known element, ethernet communication of Park, to yield predictable results, efficient communication between components of the system. Therefore, it would have been obvious to combine Park with Prabuwono in view of Chaton to obtain the invention as specified in claim 39.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a).
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Epshteyn, US 20060219789 A1, discloses a system for decoding dataforms between an acquisition and a display to easily add dataform decoding operations to hardware devices.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSHUA B CROCKETT whose telephone number is (571) 270-7989. The examiner can normally be reached Monday-Thursday, 8am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John M Villecco, can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOSHUA B. CROCKETT/
Examiner, Art Unit 2661

/JOHN VILLECCO/
Supervisory Patent Examiner, Art Unit 2661

Prosecution Timeline

May 04, 2023
Application Filed
Aug 22, 2025
Non-Final Rejection — §103, §112
Jan 27, 2026
Response Filed
Mar 24, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592060
ARTIFICIAL INTELLIGENCE DEVICE AND 3D AGENCY GENERATING METHOD THEREOF
2y 5m to grant Granted Mar 31, 2026
Patent 12587704
VIDEO DATA TRANSMISSION AND RECEPTION METHOD USING HIGH-SPEED INTERFACE, AND APPARATUS THEREFOR
2y 5m to grant Granted Mar 24, 2026
Patent 12567150
EDITING PRESEGMENTED IMAGES AND VOLUMES USING DEEP LEARNING
2y 5m to grant Granted Mar 03, 2026
Patent 12561839
SYSTEMS AND METHODS FOR CALIBRATING IMAGE SENSORS OF A VEHICLE
2y 5m to grant Granted Feb 24, 2026
Patent 12529639
METHOD FOR ESTIMATING HYDROCARBON SATURATION OF A ROCK
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
72%
Grant Probability
99%
With Interview (+27.5%)
3y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
