Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1 - 14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step One

The claims are directed to a method (claims 1 - 7) and a non-transitory machine-readable medium (claims 8 - 14). Thus, each of the claims falls within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).

As to claim 8,

Step 2A, Prong One

The claim recites in part: utilizing a neural architecture search (NAS) method to obtain a searched result, wherein the searched result comprises a plurality of sub-networks; combining the plurality of sub-networks to generate a combined neural network; and fine-tuning the combined neural network to generate the dynamic neural network.

As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components. For example, a human can mentally evaluate different possible network structures, select multiple sub-networks, combine the selected sub-networks into a larger architecture, and mentally refine the architecture based on expected performance to produce an improved neural network design.

Accordingly, at Step 2A, Prong One, the claim is directed to an abstract idea.

Step 2A, Prong Two

The claim further recites a non-transitory machine-readable medium and a processor, which are recited at a high level of generality and amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)).

Accordingly, at Step 2A, Prong Two, the additional elements, individually or in combination, do not integrate the judicial exception into a practical application.

Step 2B

The claim further recites a non-transitory machine-readable medium and a processor, which are recited at a high level of generality and amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)).

Accordingly, at Step 2B, the additional elements, individually or in combination, do not amount to significantly more than the judicial exception.

As to claim 9,

Step 2A, Prong One

The claim recites in part: the searched result is a pareto-front result.

As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components. For example, a human can obtain a Pareto-front result by mentally comparing multiple options across different criteria and selecting the non-dominated options, i.e., those that are not outperformed by another option across all criteria.

Accordingly, at Step 2A, Prong One, the claim is directed to an abstract idea.

Step 2A, Prong Two

The claim does not include additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception itself.

Step 2B

The claim does not include additional elements that are sufficient to amount to "significantly more" than the judicial exception.
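For illustration only, and not as part of the claim language or the cited references, the following is a minimal Python sketch of the kind of non-dominated ("Pareto-front") selection discussed for claim 9 above; the candidate networks, objective names, and numeric values are hypothetical.

```python
# Illustrative only: selecting the non-dominated (Pareto-front) members of a
# set of candidate networks scored on two objectives. Candidates, objectives,
# and values are hypothetical and are not taken from the claims or cited art.

# Each candidate: (name, latency_ms, error_rate); lower is better for both.
candidates = [
    ("net_a", 12.0, 0.080),
    ("net_b", 15.0, 0.065),
    ("net_c", 20.0, 0.090),   # dominated by net_a (slower and less accurate)
    ("net_d", 30.0, 0.050),
]

def dominates(x, y):
    """True if x is at least as good as y in every objective and strictly
    better in at least one (both objectives are minimized)."""
    return (x[1] <= y[1] and x[2] <= y[2]) and (x[1] < y[1] or x[2] < y[2])

pareto_front = [c for c in candidates
                if not any(dominates(other, c) for other in candidates)]

print([c[0] for c in pareto_front])  # ['net_a', 'net_b', 'net_d']
```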
As to claim 10,

Step 2A, Prong One

The claim recites in part: the dynamic neural network is a supernet with a model weight, and the model weight is shared between the plurality of sub-networks included in the dynamic neural network.

As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components. For example, a human can mentally design a supernet by selecting multiple sub-networks and deciding that they share the same model weights.

Accordingly, at Step 2A, Prong One, the claim is directed to an abstract idea.

Step 2A, Prong Two

The claim does not include additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception itself.

Step 2B

The claim does not include additional elements that are sufficient to amount to "significantly more" than the judicial exception.

As to claim 11,

Step 2A, Prong One

The claim recites in part: the step of combining the plurality of sub-networks to generate the combined neural network comprises: for each convolution layer of the combined neural network, selecting a maximum kernel size of a convolution layer among multiple corresponding convolution layers of the plurality of sub-networks as a kernel size of said each convolution layer of the combined neural network.

As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components. For example, a human can mentally compare kernel sizes from multiple convolution layers and select the largest one for a combined network layer.

Accordingly, at Step 2A, Prong One, the claim is directed to an abstract idea.

Step 2A, Prong Two

The claim does not include additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception itself.

Step 2B

The claim does not include additional elements that are sufficient to amount to "significantly more" than the judicial exception.

As to claim 12,

Step 2A, Prong One

The claim recites in part: the step of combining the plurality of sub-networks to generate the combined neural network comprises: for each convolution layer of the combined neural network, selecting a maximum number of channels of a convolution layer among multiple corresponding convolution layers of the plurality of sub-networks as a number of channels of said each convolution layer of the combined neural network.

As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components. For example, a human can mentally compare channel counts of multiple layers and select the largest one for the combined network layer.

Accordingly, at Step 2A, Prong One, the claim is directed to an abstract idea.

Step 2A, Prong Two

The claim does not include additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception itself.

Step 2B

The claim does not include additional elements that are sufficient to amount to "significantly more" than the judicial exception.
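For illustration only, the following minimal Python sketch shows the kind of layer-wise combining recited in claims 11 and 12 above, i.e., taking the maximum kernel size and the maximum number of channels among corresponding convolution layers; the sub-network descriptions and values are hypothetical and are not taken from the claims or the cited art.

```python
# Illustrative only: combining several sub-networks layer by layer by taking,
# for each convolution layer, the maximum kernel size and maximum channel
# count found among the corresponding layers of the sub-networks. The
# sub-network descriptions below are hypothetical.

# Each sub-network is a list of per-layer (kernel_size, num_channels) pairs.
sub_networks = [
    [(3, 64), (3, 128), (5, 128)],
    [(5, 32), (3, 256), (3, 128)],
    [(3, 64), (5, 128), (3, 256)],
]

def combine(sub_nets):
    """Per-layer maximum over kernel size and channel count."""
    combined = []
    for layers in zip(*sub_nets):  # corresponding layers across sub-networks
        kernel = max(k for k, _ in layers)
        channels = max(c for _, c in layers)
        combined.append((kernel, channels))
    return combined

print(combine(sub_networks))  # [(5, 64), (5, 256), (5, 256)]
```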
As to claim 13,

Step 2A, Prong One

The claim recites in part: each of the plurality of sub-networks has a DNA sequence, and the DNA sequence records a model architecture of said each of the plurality of sub-networks.

As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components. For example, a human can mentally assign a code (DNA sequence) to represent each sub-network architecture.

Accordingly, at Step 2A, Prong One, the claim is directed to an abstract idea.

Step 2A, Prong Two

The claim does not include additional elements that integrate the judicial exception into a practical application or amount to significantly more than the judicial exception itself.

Step 2B

The claim does not include additional elements that are sufficient to amount to "significantly more" than the judicial exception.

As to claim 14,

Step 2A, Prong One

The claim recites in part: the step of fine-tuning the combined neural network to generate the dynamic neural network comprises: randomly sampling at least one candidate sub-network from the searched result.

As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper, but for the recitation of generic computer components. For example, a human can randomly select (sample) at least one candidate sub-network from the searched result.

Accordingly, at Step 2A, Prong One, the claim is directed to an abstract idea.

Step 2A, Prong Two

The claim further recites: training the at least one candidate sub-network for updating a model weight of the combined neural network until a quality of the combined neural network reaches a predetermined quality, to generate at least one trained result, which is recited at a high level of generality with no detail of the training process and amounts to no more than adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea.

Accordingly, at Step 2A, Prong Two, the additional elements, individually or in combination, do not integrate the judicial exception into a practical application.

Step 2B

In accordance with Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, the additional element of training the at least one candidate sub-network for updating a model weight of the combined neural network until a quality of the combined neural network reaches a predetermined quality, to generate at least one trained result, is recited at a high level of generality with no detail of the training process and amounts to no more than adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea.

Accordingly, at Step 2B, the additional elements, individually or in combination, do not amount to significantly more than the judicial exception.

Claim 1 has similar limitations as claim 8. Therefore, the claim is rejected for the same reasons as above.
Claim 2 has similar limitations as claim 9. Therefore, the claim is rejected for the same reasons as above.
Claim 3 has similar limitations as claim 10. Therefore, the claim is rejected for the same reasons as above.
Claim 4 has similar limitations as claim 11. Therefore, the claim is rejected for the same reasons as above.
Claim 5 has similar limitations as claim 12. Therefore, the claim is rejected for the same reasons as above.
Claim 6 has similar limitations as claim 13. Therefore, the claim is rejected for the same reasons as above.
Claim 7 has similar limitations as claim 14. Therefore, the claim is rejected for the same reasons as above.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 3 - 5, 7, 8, 10 - 12, and 14 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Bhardwaj et al (US 2023/0376745).

As to claim 8, Bhardwaj et al, figures 1 - 4, shows and teaches a non-transitory machine-readable medium for storing a program code, wherein when loaded and executed by a processor (paragraph [0082]… various embodiments described herein are implemented using dedicated hardware, configurable hardware or programmed processors executing programming instructions that are broadly described in flow chart form that can be stored on any suitable electronic storage medium or transmitted over any suitable electronic communication medium), the program code instructs the processor to perform a method for generating a dynamic neural network (paragraph [0019]… automated design of neural network architecture or topology.
In particular, mechanisms are disclosed for automated selection of a sub-network from a super-net containing multiple candidate sub-networks; paragraph [0082]… various embodiments described herein are implemented using dedicated hardware, configurable hardware or programmed processors executing programming instructions that are broadly described in flow chart form that can be stored on any suitable electronic storage medium or transmitted over any suitable electronic communication medium) (Examiner's Note: "automated design of neural network architecture" reads on "a dynamic neural network"; the applicant states in paragraph [0002] of the current application that a dynamic neural network is one that can dynamically adapt network efficiency in real-time; therefore, Bhardwaj et al teaches a dynamic neural network);

utilizing a neural architecture search (NAS) method to obtain a searched result, wherein the searched result comprises a plurality of sub-networks (paragraph [0025]… FIG. 1 is a simplified block diagram of a data processor 100 for Neural Architecture Search (NAS). A super-net 102 is a neural network that includes a number of sub-networks 104; paragraph [0029]… One approach for Neural Architecture Search (NAS) is weight-sharing NAS, discussed above. In this approach, the performances of different combinations of sub-networks are compared to select a final sub-network) (Examiner's Note: "the performances of different combinations of sub-networks are compared to select a final sub-network" reads on "obtain a searched result"; "super-net 102" reads on "a plurality of sub-networks");

combining the plurality of sub-networks to generate a combined neural network (paragraph [0025]… A super-net 102 is a neural network that includes a number of sub-networks 104. In the simplified example shown, each sub-network 104 corresponds to a selected path from node A to node C. The path segment from node A to node B has three options (designated by operational blocks OPT 1, OPT 2 and OPT 3), each corresponding to a selectable operation on the input data; paragraph [0029]… One approach for Neural Architecture Search (NAS) is weight-sharing NAS, discussed above. In this approach, the performances of different combinations of sub-networks are compared to select a final sub-network) (Examiner's Note: "the performances of different combinations of sub-networks are compared to select a final sub-network" reads on "combining the plurality of sub-networks to generate a combined neural network");

and fine-tuning the combined neural network to generate the dynamic neural network (paragraph [0028]… Output 112 is compared to corresponding desired training output 116 in supervised learning controller 114. Network weights, W, of network 102 are adjusted by an amount δW (118), to reduce a cost function computed from a difference between training output 116 and network output 112) (Examiner's Note: "network weights, W, of network 102 are adjusted by an amount δW (118), to reduce a cost function" reads on "fine-tuning the combined neural network").

As to claim 10, Bhardwaj et al, figures 1 - 4, shows and teaches the non-transitory machine-readable medium, wherein the dynamic neural network is a supernet with a model weight, and the model weight is shared between the plurality of sub-networks included in the dynamic neural network (paragraph [0025]… FIG. 1 is a simplified block diagram of a data processor 100 for Neural Architecture Search (NAS).
A super-net 102 is a neural network that includes a number of sub-networks 104; paragraph [0029]… One approach for Neural Architecture Search (NAS) is weight-sharing NAS, discussed above. In this approach, the performances of different combinations of sub-networks are compared to select a final sub-network) (Examiner's Note: "weight-sharing NAS" reads on "the model weight is shared between the plurality of sub-networks included in the dynamic neural network").

As to claim 11, Bhardwaj et al, figures 1 - 4, shows and teaches the non-transitory machine-readable medium, wherein the step of combining the plurality of sub-networks to generate the combined neural network comprises: for each convolution layer of the combined neural network, selecting a maximum kernel size of a convolution layer among multiple corresponding convolution layers of the plurality of sub-networks as a kernel size of said each convolution layer of the combined neural network (paragraph [0038]… a block 400 may be configured to select a kernel size and number of channels for a layer. This may be achieved using a super-kernel, which results in a very efficient NAS method. The concept of super-kernel is depicted in FIG. 5; paragraph [0039]… FIG. 5 shows an architecture block 500, with index i, having four options. Each option is a convolutional neural network (CNN). Option 502 is a super-kernel that contains the largest kernel (5×5 in this example) and the maximum possible number of channels (256 in this example). Appropriate α-parameters are used to select various options within this kernel. In the example shown, the 5×5 kernel contains a 3×3 kernel, and the 256 channels include a sub-set of 128 channels. A direct advantage of using a single super-kernel is that it saves significant computation and memory when training the super-net. This results in significant time savings during the search process) (Examiner's Note: "Option 502 is a super-kernel that contains the largest kernel (5×5 in this example) and the maximum possible number of channels" reads on "selecting a maximum kernel size of a convolution layer among multiple corresponding convolution layers"; "each option is a convolutional neural network (CNN)" reads on "each convolution layer of the combined neural network").

As to claim 12, Bhardwaj et al, figures 1 - 4, shows and teaches the non-transitory machine-readable medium, wherein the step of combining the plurality of sub-networks to generate the combined neural network comprises: for each convolution layer of the combined neural network, selecting a maximum number of channels of a convolution layer among multiple corresponding convolution layers of the plurality of sub-networks as a number of channels of said each convolution layer of the combined neural network (paragraph [0038]… a block 400 may be configured to select a kernel size and number of channels for a layer. This may be achieved using a super-kernel, which results in a very efficient NAS method. The concept of super-kernel is depicted in FIG. 5; paragraph [0039]… FIG. 5 shows an architecture block 500, with index i, having four options. Each option is a convolutional neural network (CNN). Option 502 is a super-kernel that contains the largest kernel (5×5 in this example) and the maximum possible number of channels (256 in this example). Appropriate α-parameters are used to select various options within this kernel. In the example shown, the 5×5 kernel contains a 3×3 kernel, and the 256 channels include a sub-set of 128 channels.
A direct advantage of using a single super-kernel is that it saves significant computation and memory when training the super-net. This results in significant time savings during the search process) (Examiner's Note: "Option 502 is a super-kernel that contains the largest kernel (5×5 in this example) and the maximum possible number of channels (256 in this example)" reads on "selecting a maximum number of channels of a convolution layer among multiple corresponding convolution layers"; "each option is a convolutional neural network (CNN)" reads on "each convolution layer of the combined neural network").

As to claim 14, Bhardwaj et al, figures 1 - 4, shows and teaches the non-transitory machine-readable medium, wherein the step of fine-tuning the combined neural network to generate the dynamic neural network comprises: randomly sampling at least one candidate sub-network from the searched result (paragraph [0050]… Random Gaussian Vector); training the at least one candidate sub-network for updating a model weight of the combined neural network until a quality of the combined neural network reaches a predetermined quality, to generate at least one trained result; and obtain the dynamic neural network according to the at least one trained result (paragraph [0030]… Another approach is differentiable NAS (DNAS). DNAS also uses a super-net containing all possible sub-networks. However, the search space is relaxed to be continuous. This enables the architecture to be optimized by gradient descent. A super-net containing all possible sub-networks is trained jointly with architecture parameters (α-parameters). The super-net includes paths with selectable operations such as convolution, max-pooling, average pooling, etc. The architecture parameters represent the importance, or probability, of different architecture choices at various locations inside a super-net. Training a regular deep network involves updating weight parameters using an optimization algorithm, such as stochastic gradient descent (SGD). DNAS not only updates the actual weights of operations, but also the architecture parameters. In FIG. 1, the change to the architecture parameters is shown as vector δα (120). Hence, weights and architecture parameters are trained jointly. After training, the sub-network corresponding to maximum architectural parameter values are selected. The selected sub-network, output at 122, describes the final network architecture for the chosen application) (Examiner's Note: "Training a regular deep network involves updating weight parameters using an optimization algorithm" reads on "training the at least one candidate sub-network for updating a model weight of the combined neural network until a quality of the combined neural network reaches a predetermined quality"; "weights and architecture parameters are trained jointly" reads on "one trained result"; "the sub-network corresponding to maximum architectural parameter values are selected… describes the final network architecture for the chosen application" reads on "obtain the dynamic neural network according to the at least one trained result").

Claim 1 has similar limitations as claim 8. Therefore, the claim is rejected for the same reasons as above.
Claim 3 has similar limitations as claim 10. Therefore, the claim is rejected for the same reasons as above.
Claim 4 has similar limitations as claim 11. Therefore, the claim is rejected for the same reasons as above.
Claim 5 has similar limitations as claim 12. Therefore, the claim is rejected for the same reasons as above.
Claim 7 has similar limitations as claim 14. Therefore, the claim is rejected for the same reasons as above.
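For illustration only, the following toy Python sketch mirrors the fine-tuning step recited in claim 14 and mapped above: candidate sub-networks are randomly sampled from a searched result, and a shared model weight is updated until a predetermined quality is reached. The sub-network names, the quality measure, and the update rule are invented stand-ins, not the applicant's or Bhardwaj's actual training procedure.

```python
# Illustrative only: a toy fine-tuning loop of the kind recited in claim 14.
# Candidate sub-networks are randomly sampled from a searched result and a
# shared model weight is updated until a predetermined quality is reached.
# All names, the quality metric, and the update rule are hypothetical.

import random

searched_result = ["sub_net_small", "sub_net_medium", "sub_net_large"]
shared_weight = 0.0          # stand-in for the supernet's shared model weight
PREDETERMINED_QUALITY = 0.95

def train_step(sub_net, weight):
    """Pretend to train one sampled sub-network; returns an updated weight."""
    step = {"sub_net_small": 0.05, "sub_net_medium": 0.10, "sub_net_large": 0.15}[sub_net]
    return min(weight + step, 1.0)

def quality(weight):
    """Stand-in quality metric of the combined network (0.0 to 1.0)."""
    return weight

random.seed(0)
while quality(shared_weight) < PREDETERMINED_QUALITY:
    candidate = random.choice(searched_result)            # randomly sample a candidate
    shared_weight = train_step(candidate, shared_weight)  # update the shared weight

print(f"reached quality {quality(shared_weight):.2f}; dynamic network ready")
```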
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 2 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Bhardwaj et al (US 2023/0376745) in view of Cummings et al (US 2022/0335286).

As to claim 9, Bhardwaj et al teaches the searched result. Bhardwaj et al fails to explicitly show/teach that the searched result is a pareto-front result. However, Cummings et al teaches a searched result that is a pareto-front result (paragraph [0060]… the performance evaluator circuitry 210 utilizes NAS generated data structures, such as the graphs 300, 310 of FIGS. 3A and 3B to generate a performance indicator representing objective design space for a given workload (e.g., herein referred to as a design space performance indicator). In some examples, the performance evaluator circuitry 210 generates a design space performance indicator for each workload and/or modality of interest. Examples discussed below assume that the design space performance indicator is a hypervolume indicator. However, the design space performance indicator can be an R2 indicator, a variance metric, and/or another goodness metric that considers a distribution of vectors in additional or alternative examples. In some examples, the design space performance indicator is a fitness function. When measuring two objectives (e.g., latency versus accuracy, etc.), a hypervolume represents an area of a Pareto front).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention for Bhardwaj et al's searched result to be a pareto-front result, as in Cummings et al, for the purpose of generating design space performance indicators.

Claim 2 has similar limitations as claim 9. Therefore, the claim is rejected for the same reasons as above.

Claims 6 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Bhardwaj et al (US 2023/0376745) in view of Thornton et al (US 11,586,875).

As to claim 13, Bhardwaj et al, figures 1 - 4, shows and teaches the plurality of sub-networks. Bhardwaj et al fails to explicitly show/teach that each of the plurality of sub-networks has a DNA sequence, and that the DNA sequence records a model architecture of said each of the plurality of sub-networks. However, Thornton et al teaches a plurality of sub-networks in which each sub-network has a DNA sequence, and the DNA sequence records a model architecture of said each of the plurality of sub-networks (column 4, lines 1 - 40, teaches: In various embodiments, meta parameters can include the total number of layers, the number of layers of a particular type (e.g., convolutional layers), or the ordering of layers. Meta parameters 112 are those parameters that govern the overall architecture of the model architecture 110, such as whether convolutional layers are always followed by pooling/max-pooling layers and how deep the model architecture 110 is. In some embodiments described in more detail below, the model architecture 110 can include sub-networks or network modules as building blocks. In such embodiments, the meta parameters 112 can also include module type for the sub-network and number of repetitions of the sub-network. Meta parameters 112 can also include relative preference values for sub-network types in some embodiments… A particular model architecture 110 is uniquely defined by a set of values for given meta parameters 112 and layer parameters 114) (Examiner's Note: "A particular model architecture 110 is uniquely defined by a set of values for given meta parameters 112 and layer parameters 114" reads on "the DNA sequence records a model architecture of said each of the plurality of sub-networks").

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention for each of Bhardwaj et al's plurality of sub-networks to have a DNA sequence that records a model architecture of said each of the plurality of sub-networks, as in Thornton et al, for the purpose of improving the accuracy of data models in detecting and classifying objects or patterns of interest.

Claim 6 has similar limitations as claim 13. Therefore, the claim is rejected for the same reasons as above.
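For illustration only, the following minimal Python sketch shows one hypothetical way a sub-network's model architecture could be recorded as a compact "DNA" string, as discussed for claim 13 above; the encoding scheme and layer vocabulary are invented for illustration and do not come from the claims, Bhardwaj, or Thornton.

```python
# Illustrative only: a hypothetical "DNA" encoding that records a sub-network's
# architecture as one character per layer. The layer vocabulary and mapping
# below are invented stand-ins, not taken from the claims or the cited art.

LAYER_CODES = {"conv3x3": "A", "conv5x5": "B", "maxpool": "C", "dense": "D"}
DECODE = {v: k for k, v in LAYER_CODES.items()}

def encode(architecture):
    """Map an ordered list of layer types to a one-character-per-layer string."""
    return "".join(LAYER_CODES[layer] for layer in architecture)

def decode(dna):
    """Recover the ordered layer list from the DNA string."""
    return [DECODE[ch] for ch in dna]

sub_network = ["conv3x3", "conv5x5", "maxpool", "dense"]
dna = encode(sub_network)
print(dna)                          # ABCD
print(decode(dna) == sub_network)   # True
```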
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRANDON S COLE, whose telephone number is (571) 270-5075. The examiner can normally be reached Mon - Fri, 7:30 am - 5 pm EST (alternate Fridays off).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Omar Fernandez, can be reached at 571-272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BRANDON S COLE/
Primary Examiner, Art Unit 2128