Prosecution Insights
Last updated: April 19, 2026
Application No. 18/305,936

DATA PROCESSING APPARATUS AND METHOD THEREOF

Final Rejection §103
Filed: Apr 24, 2023
Examiner: SEYE, ABDOU K
Art Unit: 2198
Tech Center: 2100 — Computer Architecture & Software
Assignee: Canon Kabushiki Kaisha
OA Round: 2 (Final)

Grant Probability: 82% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 5m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (480 granted / 583 resolved; +27.3% vs TC avg, above average)
Interview Lift: +27.5% (strong; measured over resolved cases with interview)
Typical Timeline: 3y 5m average prosecution; 38 applications currently pending
Career History: 621 total applications across all art units

Statute-Specific Performance

§101: 21.6% (-18.4% vs TC avg)
§103: 54.6% (+14.6% vs TC avg)
§102: 2.8% (-37.2% vs TC avg)
§112: 13.0% (-27.0% vs TC avg)

Deltas are measured against the Tech Center average estimate; based on career data from 583 resolved cases.
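The headline figures above are self-consistent and can be checked directly from the raw career counts. A minimal sketch (note the Tech Center average is back-derived here from the stated +27.3% delta; it is not an independently reported figure):

```python
# Sanity-check the dashboard's headline figures from the raw career counts.
granted, resolved = 480, 583

allow_rate = 100 * granted / resolved                # career allow rate
print(f"Career allow rate: {allow_rate:.1f}%")       # -> Career allow rate: 82.3%

# The "+27.3% vs TC avg" delta implies a Tech Center average allow rate of about:
implied_tc_avg = allow_rate - 27.3
print(f"Implied TC average: {implied_tc_avg:.1f}%")  # -> Implied TC average: 55.0%
```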

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This Final Office Action is in response to the applicant's remarks and arguments filed on November 19, 2025. Claims 1-2, 4, 7 and 10-13 were amended. Claims 1-13 remain pending in the application and are being considered on the merits.

Response to Arguments

Rejections under 35 U.S.C. § 101: In view of the amendment and the applicant's remarks, the § 101 rejection is withdrawn. The applicant's argument regarding § 101 is found to be persuasive; accordingly, the rejection has been withdrawn.

Claim Rejections - 35 USC § 103: Applicant argues on pages 6-7 that:

"The distinguishable feature of the claimed invention (i.e., claim 1) includes, 'store, in the one or more memories comprising a fixed-size ring buffer……, and a parameter indicating a target processing node to transition to if processing at the processing node is not performable'."

"Yamamoto and Tao, taken alone or in any combination, do not disclose, suggest, or render obvious the distinguishable feature of the claimed invention as described above."

The examiner respectfully disagrees and submits that the applicant's arguments with respect to the newly added limitations have been considered but are moot, because the arguments do not apply to the newly cited reference, Govindan Ravindran et al., "A Performance Comparison of Hierarchical Ring- and Mesh-connected Multiprocessor Networks," 1997 IEEE, which is used in the current rejection.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
Claim Objections

Claims 1, 4 and 13 are objected to because of the following informalities:

As to claim 1, the amended terms "…comprising a fixed-size ring buffer" in line 7 and "a parameter indicating a target processing node to transition to if processing at the processing node is not performable" in lines 11-12 should have been underlined. The text of any added subject matter must be shown by underlining the added text (see 37 CFR 1.121(c)).

As to claim 4, the amended term "…the target processing node to transition to indicated by the parameter, as the processing node to perform the processing" in lines 5-6 should have been underlined. The text of any added subject matter must be shown by underlining the added text (see 37 CFR 1.121(c)).

As to claim 13, the amended terms "one or more memories comprising a fixed-size ring buffer" in line 4 and "a parameter indicating a target processing node to transition to if processing at the processing node is not performable" in lines 8-9 should have been underlined. The text of any added subject matter must be shown by underlining the added text (see 37 CFR 1.121(c)).

Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-13 are rejected under 35 U.S.C. 103 as being unpatentable over Yamamoto et al.
(US 2011/0239224, Yamamoto hereinafter) in view of Yiting Tao et al., "Unsupervised-Restricted Deconvolutional Neural Network for Very High Resolution Remote-Sensing Image Classification," IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, VOL. 55, NO. 12, DECEMBER 2017 (Tao hereinafter), and Govindan Ravindran et al., "A Performance Comparison of Hierarchical Ring- and Mesh-connected Multiprocessor Networks," 1997 IEEE (Govindan hereinafter).

As to claim 1, Yamamoto teaches a data processing apparatus comprising one or more circuitry configured to (e.g., see Figs. 1 and 2; para [0052]: "A CNN processing unit 63 is a feature detection processing unit including a hierarchical calculation processing apparatus. Details of the CNN processing unit 63 will be described later with reference to FIG. 2."):

sequentially perform processing of data by a plurality of hierarchically connected processing nodes (e.g., para [0058]: "In FIG. 2, a sequence control unit 100 outputs sequence instruction information to a unit calculation execution unit 101 in accordance with calculation order information set in advance. In this embodiment, as described above, the hierarchical calculation processing apparatus executes calculations specified in the respective processing nodes in a time-sharing fashion. Therefore, the sequence control unit 100 controls the order of calculations specified in the respective processing nodes by the unit calculation execution unit 101"; para [0062]: "a series of calculations are executed for the entire input image (entire input data).");

store, in the one or more memories, processing results of the plurality of respective processing nodes, and processing statuses (para [0031]: "a determination step of determining, based on storage states of calculation results in partial areas of the memory assigned to the processing node designated in the designation step and to processing nodes connected to a previous stage of the designated processing node, whether or not to execute a calculation of the designated processing node." Thus, the "storage states of calculation results" include the processing statuses) and parameters (e.g., para [0075]: "The network composition management unit 102 manages information that specifies the network composition of the hierarchical calculations to be calculated by the hierarchical calculation processing apparatus of this embodiment. The network composition means the connection relationship among processing nodes, the convolution kernel size used in the calculation processing used in each processing node, and the like."; para [0076]: "The address calculation parameter storage table 107 records the network composition information managed by the network composition management unit 102, and address management information required for read and write accesses to the memory 104 that occur upon execution of calculations. The address calculation parameter storage table 107 stores various kinds of information for respective processing nodes." Thus, this "information", the positions where data is to be read and written, and the "various kinds of information for respective processing nodes" include the parameters) for the plurality of respective processing nodes, the parameters being used to determine a processing node to perform the processing (e.g., paras [0075]-[0076], cited above);

cyclically specify processing nodes, from among the plurality of processing nodes, to perform the processing in an order (e.g., "to cyclically execute all the processing nodes which compose the hierarchical calculation network. For example, upon execution of the CNN shown in FIG. 3 by the hierarchical calculation processing apparatus of this embodiment, the sequence control unit 100 instructs the unit calculation execution unit 101 to cyclically execute the respective processing nodes"; para [0058]: "In FIG. 2, a sequence control unit 100 outputs sequence instruction information to a unit calculation execution unit 101 in accordance with calculation order information set in advance. In this embodiment, as described above, the hierarchical calculation processing apparatus executes calculations specified in the respective processing nodes in a time-sharing fashion. Therefore, the sequence control unit 100 controls the order of calculations specified in the respective processing nodes by the unit calculation execution unit 101.");

determine whether the processing by a specified processing node is performable based on the stored processing statuses (e.g., para [0031]: "a determination step of determining, based on storage states of calculation results in partial areas of the memory assigned to the processing node designated in the designation step and to processing nodes connected to a previous stage of the designated processing node, whether or not to execute a calculation of the designated processing node"; para [0032]: "an execution step of controlling, when it is determined in the determination step that the calculation is executed, to execute calculation processing corresponding to the designated processing node."); and

determine a processing node to perform the processing based on a result of determination and the stored parameter for the specified processing node (e.g., para [0130]: "Furthermore, upon reception of a unit calculation start instruction from the unit calculation execution determination unit 105 (details of notification will be described later), the network composition management unit 102 outputs address calculation parameters to the ring buffer management unit 103 to give the instruction to calculate addresses. The address calculation parameters to be output to the ring buffer management unit 103...").

However, Yamamoto does not teach the one or more memories comprising a fixed-size ring buffer; a parameter indicating a target processing node to transition to if processing at the processing node is not performable; or performing the processing in an order based on hierarchy.

Tao teaches to perform the processing (e.g., "data preprocessing" in Fig. 4; "data preprocessing, SCAE training (parameter initialization), and URDNN") in an order based on hierarchy (e.g., see page 6806: "Deconvolution can be regarded as the inverse process of convolution that helps transform the feature map from convolutional layers to the original input size. It conducts hierarchical feature extraction, ranging from overall features to class-specific features."; page 6809: "The SAE is a model that stacks the AE's input and hidden layers in a sequence [48]. An AE model usually contains an input layer, a hidden layer, and an output layer. The relationship between the input layer and the hidden layer is called the 'encoder,' whereas that between the hidden and the output layers is called the 'decoder.' The decoder generates data of the same size as the input data, which can be regarded as the process of input reconstruction" for "A. Data Preprocessing"; see FIG. 6, page 6811: "hierarchical features are extracted after multiple convolutional building blocks. Each block consists of a convolution and nonlinear transformation ReLUs, and may be equipped with pooling layers".

According to the applicant's specification in <Control of Layer Processing Order>, para [0045]: "Fig. 6 illustrates an example of parameter assignment to the layers and the resulting control of layer processing order in processing the network of Fig. 4 by the data processing apparatus and method according to the present exemplary embodiment. In the first exemplary embodiment, which layer to transition to and perform (resume) next processing at when input layer-side feature data is determined to be insufficient at each layer is assigned in the form of parameters. For example, if a line of the second layer 403 is determined to be incalculable, the data processing apparatus returns to the first layer 401 and resumes processing based on the value '1' of a parameter 602 for the second layer 403. If a line of the third layer 404 is determined to be incalculable, the data processing apparatus returns to the second layer 403 and resumes processing based on the value '2' of a parameter 603 for the third layer 404. Although the lines of the first layer 401 will never be incalculable, a parameter 601 having a value of '1' indicating the first layer 401 is set for the first layer 401 for the sake of convenience."

Therefore, the "layers in a sequence", coupled with the "input and hidden layers in a sequence", the "convolutional layers", and the hierarchical feature extraction "ranging from overall features to class-specific features" for processing an image, include performing the processing in an order based on hierarchy).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Yamamoto by adopting the teachings of Tao to allow "efficient and effective processing and the interpretation of VHR satellite images" (see Tao, Abstract).

Govindan teaches one or more memories comprising a fixed-size ring buffer (e.g., page 58: "the fixed bandwidth of rings independent of ring size" for the "hierarchical rings" on page 59; see Figures 1 and 2), and a parameter indicating a target processing node to transition to if processing at the processing node is not performable (e.g., see pages 59-60: "a request packet to be sent to the target memory"; "The switching technique used determines how packets are forwarded through the network"; "When a packet cannot move forward because the next link is busy, it is blocked in place"; "The NIC examines the header of a packet and switches 1) incoming packets from the ring to a PM" for "Parameter R E (0, l), the size of the memory access Region"; "2.3 Simulator" on page 61.

According to the applicant's specification in para [0131]: "In such a case, the memory control unit 102 refers to the transition destination control parameters 108 in the memory 105, and reserves the storage areas for the ring buffers of the respective nodes. Like other information as the transition destination control parameters 108, the sizes of the ring buffers of the respective nodes are calculated in advance and held in the memory 105. <Node-by-Node Switching of Ring Buffer Use>." Thus, Govindan teaches a parameter indicating a target processing node to transition to if processing at the processing node is not performable);

and teaches to cyclically specify processing nodes, from among the plurality of processing nodes, to perform the processing in an order based on hierarchy (e.g., see page 61, "2.3 Simulator": "the network clock cycle is the same as the PM clock cycle"; "hierarchical rings" and "2.1 Hierarchical ring system description" for "a hierarchical ring, there are two types of network nodes"; "The NIC examines the header of a packet and switches 1) incoming packets from the ring to a PM, 2) outgoing packets from the PM to the ring, and 3) continuing packets from the input link to the output link. The IRI controls the traffic between two rings and is modelled as a 2 x 2 crossbar switch"; "all communication occurs synchronously; that is, within a clock cycle"; "network cycle" on pages 59-60; see Figures 3 and 4. Thus, Govindan teaches to cyclically specify processing nodes, from among the plurality of processing nodes, to perform the processing in an order based on hierarchy).
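The transition-parameter control quoted from the applicant's specification (para [0045]) lends itself to a short sketch. This is a hypothetical illustration, not the applicant's implementation: the function and variable names are invented, and line availability is modeled with a simplified per-layer kernel height rather than the specification's ring-buffer bookkeeping.

```python
# Illustrative sketch (hypothetical names, NOT the applicant's implementation)
# of the transition-parameter control quoted above: each layer carries a
# parameter naming the layer to return to when a line of that layer is
# incalculable. Per the quoted Fig. 6 example, layer 2 falls back to layer 1
# (parameter value "1") and layer 3 falls back to layer 2 (parameter value "2").

def run_schedule(num_lines, transition_param, kernel_height):
    layers = sorted(transition_param)
    produced = {l: 0 for l in layers}   # output lines written per layer
    produced[0] = num_lines             # layer 0 models the fully present input
    visits = []                         # visit order, for inspection
    layer = layers[0]
    while any(produced[l] < num_lines for l in layers):
        visits.append(layer)
        # Output line i of a layer needs input lines up to i + kernel_height - 1,
        # clamped at the bottom edge of the data (simplified boundary handling).
        need = min(produced[layer] + kernel_height[layer], num_lines)
        if produced[layer] < num_lines and produced[layer - 1] >= need:
            produced[layer] += 1                        # the next line is calculable
            layer = layer + 1 if layer + 1 in transition_param else layers[0]
        elif produced[layer] >= num_lines:              # layer done; move on
            layer = layer + 1 if layer + 1 in transition_param else layers[0]
        else:                                           # incalculable: fall back
            layer = transition_param[layer]
    return visits, {l: produced[l] for l in layers}

visits, produced = run_schedule(
    num_lines=4,
    transition_param={1: 1, 2: 1, 3: 2},   # parameters 601-603 from the Fig. 6 example
    kernel_height={1: 1, 2: 1, 3: 2},      # assumed per-layer input needs
)
print(produced)  # -> {1: 4, 2: 4, 3: 4}
```

With these parameters, the visit order shows layer 3 falling back to layer 2, and layer 2 falling back to layer 1, whenever input-side lines are insufficient, until every layer has produced all of its output lines.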
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the method of Yamamoto and Tao by adopting the teachings of Govindan to have hierarchical rings that "can be clocked at faster rates", "allow addition and removal of processing nodes at arbitrary locations", and provide a topology that "allows natural exploitation in the spatial locality of application memory access patterns" and "allows efficient implementation of broadcasts" (see Govindan, Abstract).

As to claim 2, Yamamoto teaches wherein the one or more circuitry include a plurality of storage areas configured to hold the processing results from the plurality of respective processing nodes (e.g., see Abstract: "A calculation processing apparatus, which executes calculation processing based on a network composed by hierarchically connecting a plurality of processing nodes, assigns a partial area of a memory to each of the plurality of processing nodes, stores a calculation result of a processing node in a storable area of the partial area assigned to that processing node, and sets, as storable areas, areas that store the calculation results whose reference by all processing nodes connected to the subsequent stage of that processing node is complete"), a processing result in each of the plurality of storage areas being overwritten with another processing result from a same respective processing node (e.g., para [0074]: "As described above, in the memory 104, the partial areas assigned to respective processing nodes are used as ring buffers. The (logical) width of each ring buffer at that time is the same as that of the input image. The ring buffer is cyclically overwritten and used for respective lines each having a height '1'. Therefore, one line of the ring buffer is updated every time the unit calculation is made"; and "an upper layer processing node immediately executes the unit calculation, and the calculation result which was used in that unit calculation and is no longer required is discarded (an area which stores that calculation result is defined as an overwritable area, that is, an area which can store a new calculation result). The first embodiment realizes the effective use of the memory by such memory control" in para [0185]).

As to claim 3, Yamamoto teaches wherein the parameters are determined based on a structure of the plurality of processing nodes and stored in advance (e.g., para [0076]: "The address calculation parameter storage table 107 records the network composition information managed by the network composition management unit 102, and address management information required for read and write accesses to the memory 104 that occur upon execution of calculations. The address calculation parameter storage table 107 stores various kinds of information for respective processing nodes.").

As to claim 4, Yamamoto does not teach wherein the one or more processors are further configured to, with the specified processing node determined not to be performable, determine the target processing node to transition to indicated by the parameter, as the processing node to perform the processing. Govindan teaches, with the specified processing node determined not to be performable, determining the target processing node to transition to indicated by the parameter, as the processing node to perform the processing (see the rejection of claim 1 above).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify the method of Yamamoto and Tao by adopting the teachings of Govindan to have hierarchical rings that "can be clocked at faster rates", "allow addition and removal of processing nodes at arbitrary locations", and provide a topology that "allows natural exploitation in the spatial locality of application memory access patterns" and "allows efficient implementation of broadcasts" (see Govindan, Abstract).

As to claim 5, Yamamoto teaches wherein the parameters are determined based on a structure of the plurality of processing nodes, sizes of data to be processed by the plurality of respective processing nodes, and sizes of data generated as the processing results (e.g., para [0068]: "As can be seen from FIG. 4A, upon execution of the unit calculation, an area 605 having a horizontal size which is at least equal to the calculation target image and a vertical size which is equal to that of the convolution kernel is required as a required area of the calculation target image. That is, data of this area 605 serve as processing target data of the unit calculation by the processing node. For the sake of simplicity, this area 605 will be referred to as a unit calculation target image area hereinafter. The convolution calculations can be made for the entire area of the calculation target image 601 by executing the unit calculation indicated by the result area 604 while shifting the unit calculation target area 605. Note that FIG. 4B shows a case in which the unit calculation is made for an image area 610 as a unit calculation target when the unit calculation target image area is shifted for one pixel (for one horizontal line) from the state in FIG. 4A. A result area 611 is also shifted for one pixel down from the result area 604. At this time, whether or not to execute a certain unit calculation depends on whether or not pixel data of an image area as a unit calculation target of that unit calculation have been calculated by the processing node of the previous layer, and that result is output.").

As to claim 6, Yamamoto further teaches wherein the processing by the plurality of processing nodes includes convolution processing (e.g., para [0068]: "The convolution calculations can be made for the entire area of the calculation target image 601 by executing the unit calculation indicated by the result area 604 while shifting the unit calculation target area 605."). However, Yamamoto does not teach deconvolution processing. Tao teaches wherein the processing by the plurality of processing nodes includes convolution processing and deconvolution processing (e.g., see page 6806: "transformed the feature maps from convolutional layers to original input size through deconvolution."). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Yamamoto by adopting the teachings of Tao to allow "efficient and effective processing and the interpretation of VHR satellite images" (see Tao, Abstract).

As to claim 7, Yamamoto teaches wherein the parameters include a value about each processing node of the plurality of processing nodes, the value indicating a number of times for the processing node to continuously perform the processing, and wherein the one or more circuitries are further configured to, with the specified processing node determined to perform the processing (e.g., see FIG. 7; para [0160]: "That is, the storage amounts of a certain processing node exist as many as the number of adjacent upper layer processing nodes of that processing node, and increase or decrease as follows. [0161] If that processing node executes the unit calculation, the storage amounts corresponding to all the adjacent upper layer processing nodes increase by one line. [0162] If a certain adjacent upper layer processing node of that processing node executes the unit calculation, the storage amount corresponding to that adjacent upper layer processing node decreases by one line. [0163] The storage amount calculation unit 109 calculates storage amounts upon making the unit calculation target image area examination (steps S101 to S111) and upon making the unit calculation result write area examination (steps S201 to S211). In either case, the storage amount is calculated based on the read counter value, write counter value, and the number of storable lines sent from the network composition management unit 102. However, as described above, the read counter value used in the unit calculation target image area examination is that associated with the designated processing node for the adjacent lower layer processing node. Also, the write counter value used in the unit calculation target image area examination is that when the designated processing node is defined as a target processing node. On the other hand, the read counter value used in the unit calculation result write area examination is that when the adjacent upper layer processing node is defined as a target processing node, and the designated processing node is defined as the adjacent lower layer processing node. Also, the write counter value used in the unit calculation result write area examination is that of the designated processing node."), in a case where a number of times the processing is continuously performed by the specified processing node is less than the number of times indicated by the parameter, determine the specified processing node to perform the processing (e.g., see FIG. 7; para [0164]: "The storage amount calculation processing by the storage amount calculation unit 109 (steps S102 to S109, steps S202 to S209) will be described in detail below. Upon starting the storage amount calculation (step S102, step S202), the storage amount calculation unit 109 compares the read counter value and write counter value (step S103, step S203). If the write counter value is larger, a value obtained by subtracting the read counter value from the write counter value is defined as a storage amount (steps S104 and S105, steps S204 and S205). On the other hand, if the write counter value is smaller, a value obtained by adding the number of storable lines to the write counter value, and then subtracting the read counter value from that sum is defined as a storage amount (steps S104 and S106, steps S204 and S206).").

As to claim 8, Yamamoto teaches wherein the value indicating the number of times for each processing node to continuously perform the processing is set based on a size of a unit of processing of data for the processing node to process (e.g., para [0165]: If the write counter value is equal to the read counter value, either the storage amount is zero or the ring buffer is full of data, but these cases are indistinguishable from the write counter value and read counter value. Hence, which of a corresponding write counter and read counter counts last is managed. With this information, when the write counter value is equal to the read counter value, and the write counter counts last, it is determined that the write counter value reaches the read counter value. On the other hand, when the read counter counts last, it is determined that the read counter value reaches the write counter value.
Then, the storage amount is calculated by distinguishing whether: [0166] the write counter value and read counter value are equal to each other since the write counter value reaches the read counter value (in this case, the ring buffer is full of data) (steps S103, S107, and S106; steps S203, S207, and S206); or [0167] the write counter value and read counter value are equal to each other since the read counter value reaches the write counter value (in this case, the storage amount of the ring buffer is zero) (steps S103, S107, and S108; steps S203, S207, and S208)).

As to claim 9, Yamamoto teaches wherein processing by at least one processing node of the plurality of processing nodes includes processing that refers to processing results from two or more other processing nodes (e.g., para [0168]: "In this way, a predetermined amount is added to the storage amount when the calculation result of the calculation processing of the corresponding processing node is written in a partial area of the memory. On the other hand, a predetermined amount is subtracted from the storage amount when the calculation processing of a processing node connected to the subsequent stage of the corresponding processing node is completed.").

As to claim 10, Yamamoto teaches wherein the parameters include sizes of areas to hold the processing results of the plurality of respective processing nodes, and wherein the one or more circuitries are further configured to set the areas to hold the processing results of the plurality of respective processing nodes in the one or more memories based on the sizes (e.g., para [0068]: "As can be seen from FIG. 4A, upon execution of the unit calculation, an area 605 having a horizontal size which is at least equal to the calculation target image and a vertical size which is equal to that of the convolution kernel is required as a required area of the calculation target image. That is, data of this area 605 serve as processing target data of the unit calculation by the processing node. For the sake of simplicity, this area 605 will be referred to as a unit calculation target image area hereinafter. The convolution calculations can be made for the entire area of the calculation target image 601 by executing the unit calculation indicated by the result area 604 while shifting the unit calculation target area 605. Note that FIG. 4B shows a case in which the unit calculation is made for an image area 610 as a unit calculation target when the unit calculation target image area is shifted for one pixel (for one horizontal line) from the state in FIG. 4A. A result area 611 is also shifted for one pixel down from the result area 604. At this time, whether or not to execute a certain unit calculation depends on whether or not pixel data of an image area as a unit calculation target of that unit calculation have been calculated by the processing node of the previous layer, and that result is output.").

As to claim 11, Yamamoto teaches wherein the parameters include a value indicating a processing node to store all processing results, and wherein the one or more circuitries are further configured to refer to the parameters and control whether to store all or part of the processing result of each of the plurality of processing nodes in the one or more memories (e.g., para [0072]: "(2) When the unit calculation execution unit 101 receives the sequence instruction information from the sequence control unit 100, a unit calculation execution determination unit 105 determines whether or not the instruction unit calculation can be executed. Note that the operation and determination of this unit calculation execution determination unit 105 will be described later, and the unit 105 uses information indicating whether or not pixel data of an image area as a target of that unit calculation are available as one criterion. When it is determined that the unit calculation can be executed, the unit calculation execution unit 101 executes the calculation specified in the processing node instructed by the instruction information for the unit calculation (for example, for one row in the horizontal direction). Upon completion of the unit calculation, the unit 101 notifies the sequence control unit 100 of completion of the unit calculation. When it is determined that the unit calculation cannot be executed, the unit calculation execution unit 101 skips the corresponding unit calculation, and notifies the sequence control unit 100 of completion of the unit calculation").

As to claim 12, Yamamoto teaches wherein the one or more circuitries are further configured to determine whether the processing by the processing node is performable based on whether the one or more circuitries have an area available to store the processing result of the specified processing node (e.g., para [0075]: "The network composition management unit 102 manages information that specifies the network composition of the hierarchical calculations to be calculated by the hierarchical calculation processing apparatus of this embodiment. The network composition means the connection relationship among processing nodes, the convolution kernel size used in the calculation processing used in each processing node, and the like."; para [0076]: "The address calculation parameter storage table 107 records the network composition information managed by the network composition management unit 102, and address management information required for read and write accesses to the memory 104 that occur upon execution of calculations. The address calculation parameter storage table 107 stores various kinds of information for respective processing nodes").

As to claim 13, see the rejection of claim 1 above.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL.
See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDOU K SEYE, whose telephone number is (571) 270-1062. The examiner can normally be reached M-F 9:00-5:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pierre Vital, can be reached at (571) 272-4215. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ABDOU K SEYE/
Examiner, Art Unit 2198

/PIERRE VITAL/
Supervisory Patent Examiner, Art Unit 2198
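As an editorial aid for readers tracing the Yamamoto passages cited above, the described behavior — a sequence controller that either executes or skips a "unit calculation" (one horizontal row) depending on whether the previous layer has produced the input rows the convolution kernel needs, with per-node results held in fixed-size ring buffers — can be sketched roughly as follows. This is an illustrative reconstruction under stated assumptions, not Yamamoto's or the applicant's actual implementation; all names (`RingBuffer`, `step`, `kernel_height`) are hypothetical.

```python
class RingBuffer:
    """Fixed-size ring buffer holding the result rows of one processing node."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.rows = []
        self.head = 0  # index of the oldest row, overwritten next

    def push(self, row):
        if len(self.rows) < self.capacity:
            self.rows.append(row)
        else:
            self.rows[self.head] = row  # reuse the oldest slot
            self.head = (self.head + 1) % self.capacity


def step(node, buffers, produced):
    """Attempt one unit calculation (one horizontal row) for `node`.

    Returns True if the row was computed, or False if it was skipped
    because the previous layer has not yet produced the input rows
    this node's convolution kernel needs (per Yamamoto para [0072],
    the skip is still reported to the sequence controller as done).
    """
    prev = node["prev"]
    # Computing output row k requires input rows up to k + kernel_height.
    needed = produced[node["id"]] + node["kernel_height"]
    if prev is not None and produced[prev] < needed:
        return False  # input not ready: skip this unit calculation
    buffers[node["id"]].push(f"row{produced[node['id']]}")
    produced[node["id"]] += 1
    return True
```

A fuller sketch would also check, in the spirit of claim 12, that the target node's ring buffer has an area available to store the result before executing; that check is omitted here for brevity.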

Prosecution Timeline

Apr 24, 2023
Application Filed
Aug 19, 2025
Non-Final Rejection — §103
Nov 21, 2025
Response Filed
Feb 23, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598527
Real-Time Any-G SON
2y 5m to grant Granted Apr 07, 2026
Patent 12587456
MACHINE LEARNING BASED EVENT MONITORING
2y 5m to grant Granted Mar 24, 2026
Patent 12585512
CUSTOMIZED SOCKET APPLICATION PROGRAMMING INTERFACE FUNCTIONS
2y 5m to grant Granted Mar 24, 2026
Patent 12541410
THREAD SPECIALIZATION FOR COLLABORATIVE DATA TRANSFER AND COMPUTATION
2y 5m to grant Granted Feb 03, 2026
Patent 12530245
CONTAINER IMAGE TOOLING STORAGE MIGRATION
2y 5m to grant Granted Jan 20, 2026
Based on the examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
82%
Grant Probability
99%
With Interview (+27.5%)
3y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 583 resolved cases by this examiner. Grant probability derived from career allow rate.
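The headline grant probability in this panel follows directly from the career counts reported earlier on the page (480 granted of 583 resolved); as a quick arithmetic check:

```python
granted, resolved = 480, 583      # career counts reported for this examiner
allow_rate = granted / resolved   # career allow rate

# 480 / 583 ≈ 0.8233, which rounds to the 82% grant probability shown.
print(f"{allow_rate:.1%}")        # → 82.3%
```

Note that the interview-lift and "99% with interview" figures cannot be reconstructed from the counts shown alone, since the page does not break out how many of the 583 resolved cases included an interview.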
