Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Rejections under 35 U.S.C. §103
Applicant's arguments filed 1/28/26 have been fully considered but they are not persuasive.
“ … Neither reference, alone or in combination, generates post-compilation metrics for the binary files, e.g., a file size of the binary file, a number of strings included in the binary file, a page size of the build environment, a number or model of CPUs in the build environment, or CPU flags in the build environment, as presently recited in claim 1.” (pg. 10, 1st full par.)
“… These metrics are generated after compilation, by analyzing the binary files that were actually produced and the environments in which the binary files were generated … not by executing the binary files or measuring runtime performance.” (par. bridging pp. 10 and 11)
First, the claims do not recite when or how the metrics are generated. Accordingly, Pokorny’s “The recommended software-stack 106 may indicate … a certain number and type of processor” (par. [0017]) appears to address the generating-metrics limitation as written. More specifically, it discloses a metric indicating “a number of CPUs in the build environment” (i.e., number of processors) and “a model of CPUs” (i.e., type of processor). Further, the score upon which the selection is made is based, at least in part, on this metric (par. [0015] “the scoring function can take into account hardware … characteristics of … the target build environment 122”). Thus, Pokorny discloses generating at least one set of metrics as claimed.
Further, at least some of the specific metrics listed (e.g., number of CPUs, model of CPUs, CPU flags, number of strings) appear to be available, e.g., in a standard ELF file header and sections (see, e.g., pg. 1-4 “e_machine” and “e_flags”, pg. 1-17 “String Table”; also note applicant’s par. [00137]). Thus, generating and retrieving these metrics after compilation by analyzing the binaries would require only an ordinary level of skill in the art. Accordingly, while any specific language will require search and consideration, it does not appear that limitations directed to a process like that disclosed, e.g., in applicant’s par. [00137] would represent a patentable distinction over the cited references.
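For illustration only (an examiner’s sketch, not relied upon as evidence): the cited fields sit at fixed offsets in the ELF header, so reading them after compilation requires no execution of the binary. The sketch below assumes the ELF64 little-endian layout and a hypothetical header constructed in place; it parses e_machine (CPU model) and e_flags (CPU flags) using only Python’s standard library.

```python
import struct

def elf_header_metrics(data: bytes) -> dict:
    """Read e_machine and e_flags from an ELF64 little-endian header.

    Per the ELF specification, e_machine is a 2-byte field at offset 18
    and, in the 64-bit layout, e_flags is a 4-byte field at offset 48.
    """
    if data[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    (e_machine,) = struct.unpack_from("<H", data, 18)
    (e_flags,) = struct.unpack_from("<I", data, 48)
    return {"cpu_model": e_machine,
            "cpu_flags": e_flags,
            "file_size": len(data)}

# Hypothetical 64-byte ELF64 header, e_machine = 62 (EM_X86_64):
header = bytearray(64)
header[0:4] = b"\x7fELF"
struct.pack_into("<H", header, 18, 62)
metrics = elf_header_metrics(bytes(header))
```

The point is only that such post-compilation static analysis is routine: the metrics are read directly from the file, not measured at runtime.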
“To the extent the Office relies on equivalence …” (pg. 11, 2nd full par.)
The Office does not rely on equivalence beyond what is required to compare two disclosures.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-7, 10-15, 18-20 and 23 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2023/0053820 to Pokorny et al. (Pokorny) in view of US 7,293,257 to Czerwonka (Czerwonka).
Claim 1: Pokorny discloses a system comprising:
an electronic processor configured to:
obtaining a first set of source code (par. [0006] “a software item, such as a software application”);
obtaining a test compilation scenario, wherein the test compilation scenario specifies a respective build architecture (par. [0017] “recommended software-stack 106 can specify certain hardware components or software packages that are to be included in the target build environment 122”);
generating a respective binary file corresponding to each test compilation scenario of the plurality of test compilation scenarios to obtain a plurality of binary files (par. [0040] “the target software item in the target build environment 122 … compile source code”), wherein generating the respective binary file comprises:
instantiating a respective build environment according to the respective build architecture (e.g. par. [0023] “install the specified packages 110”, par. [0037] “configure the build process 108 so as to be perform on the target built [sic] environment 122”); and
compiling the first set of source code in the respective build environment to generate a respective binary file (par. [0040] “commanding a compiler to compile source code”);
generating a respective set of metrics for each respective binary file of the plurality of binary files to obtain a plurality of sets of metrics for the plurality of binary files (par. [0043] “determine performance metrics 128 for the software builds 124”), the respective set of metrics for each respective binary file comprising one or more of:
a file size of the respective binary file,
a page size of the respective build environment,
a number of CPUs in the respective build environment (par. [0017] “certain number … of processor”),
a model of CPUs in the respective build environment (par. [0017] “certain … type of processor”),
CPU flags in the respective environment, or
a number of strings included in the respective binary file;
comparing two or more sets of the plurality of sets of metrics, to select a particular binary file (par. [0016] “select one of the software stack candidates … based on its corresponding score … the highest score”, par. [0015] “the scoring function can take into account hardware and software characteristics of … the target build environment 122”);
determining a target test compilation scenario, of the plurality of test compilation scenarios, that is associated with the particular binary file (par. [0016] “select one of the software stack candidates”); and
generating a recommendation, for compiling a second set of source code, comprising the target test compilation scenario (par. [0016] “a recommended software stack”).
Pokorny does not explicitly disclose a plurality of test compilation scenarios, wherein each of the plurality of test compilation scenarios specifies a respective build architecture.
Czerwonka teaches a plurality of test compilation scenarios, wherein each of the plurality of test compilation scenarios specifies a respective build architecture (col. 9, 29-32 “repeats until all required combinations are covered … in the form of resultant set of test cases 321”).
It would have been obvious at the time of filing to obtain a plurality of test compilation scenarios for a plurality of build architectures. Those of ordinary skill in the art would have been motivated to do so to ensure that all combinations are tested (e.g. Czerwonka col. 9, 29-32 “all required combinations are covered”).
Claim 2: Pokorny and Czerwonka teach the system of claim 1, wherein each of the plurality of test compilation scenarios further specifies a respective target architecture (Pokorny par. [0008] “recommended software-stack based on … a target computing environment in which the target software item is to … run”),
wherein compiling the first set of source code in the respective build environment comprises compiling the first set of source code for execution in the respective target architecture (Pokorny par. [0040] “commanding a compiler to compile source code”).
Claim 3: Pokorny and Czerwonka teach the system of claim 1, wherein each of the plurality of test compilation scenarios further specifies a respective build operating system and a respective target operating system (Pokorny par. [0008] “based on … a target computing environment in which the target software item is to be built or is to run”, par. [0017] “indicated … an operating system”),
wherein instantiating the respective build environment comprises instantiating the respective build environment using the respective build operating system (Pokorny par. [0017] “indicated … an operating system”),
wherein compiling the first set of source code in the respective build environment comprises generating the respective binary file for execution in the respective target operating system (Pokorny par. [0017] “required packages … proper operation at runtime”).
Claim 4: Pokorny and Czerwonka teach the system of claim 1, wherein each of the plurality of test compilation scenarios further specifies a respective compilation method, and
wherein compiling the first set of source code in the respective build environment comprises compiling the first set of source code using the respective compilation method (Pokorny par. [0020] “compiler flags or settings for use in compiling source code”).
Claim 5: Pokorny and Czerwonka teach the system of claim 4, wherein the respective compilation method includes one selected from a group consisting of a native compilation, an emulated compilation, and a cross-compilation (Pokorny par. [0040] “commanding a compiler to compile source code”, this appears to describe a native compilation).
Claim 6: Pokorny and Czerwonka teach the system of claim 1, wherein the electronic processor is further configured to:
obtaining a plurality of compute types (Pokorny par. [0017] “The recommended software-stack 106 may indicate … number and type of processor … operating system”); and
obtaining a plurality of compilation methods (Pokorny par. [0020] “compiler flags or settings for use in compiling source code”),
wherein obtaining the plurality of test compilation scenarios comprises generating the plurality of test compilation scenarios based on permutations of the plurality of compute types and the plurality of compilation methods (Czerwonka col. 9, 29-32 “repeats until all required combinations are covered … in the form of resultant set of test cases 321”).
Claim 7: Pokorny and Czerwonka teach the system of claim 6, wherein each of the plurality of compute types includes a respective build architecture and a respective target architecture (Pokorny par. [0008] “recommended software-stack … based on … a target computing environment in which the target software item is to be built or is to run”) and wherein the plurality of compilation methods includes a native compilation, an emulated compilation, and a cross-compilation (Pokorny par. [0040] “commanding a compiler to compile source code”, this appears to describe a native compilation).
Claim 10: Pokorny and Czerwonka teach the system of claim 1, wherein each set of the plurality of sets of metrics includes at least two selected from a group consisting of a file size of the respective binary file, a page size of the respective build environment, a number of CPUs in the respective build environment (Pokorny par. [0017] “certain number … of processor”), a model of CPUs in the respective build environment (Pokorny par. [0017] “certain … type of processor”), CPU flags in the respective build environment, and a number of strings included in the respective binary file.
Claim 11: Pokorny discloses a system comprising:
an electronic processor configured for:
receiving, via a user interface, user input indicating a build architecture to be used in compiling a set of source code (par. [0017] “recommended software-stack 106 can specify certain hardware components or software packages that are to be included in the target build environment 122”, par. [0010] “user-inputted restrictions”);
instantiating a build environment, wherein the build environment is instantiated according to the build architecture (e.g. par. [0023] “install the specified packages 110”, par. [0037] “configure the build process 108 so as to be perform on the target built [sic] environment 122”);
compiling the set of source code in the build environment to generate a respective binary file (par. [0040] “commanding a compiler to compile source code”);
generating a respective set of metrics for each respective binary file of the plurality of binary files to obtain a plurality of sets of metrics for the plurality of binary files (par. [0043] “determine performance metrics 128 for the software builds 124”), the respective set of metrics for each respective binary file comprising one or more of:
a file size of the respective binary file,
a page size of the respective build environment,
a number of CPUs in the respective build environment (par. [0017] “certain number … of processor”),
a model of CPUs in the respective build environment (par. [0017] “certain … type of processor”),
CPU flags in the respective environment, or
a number of strings included in the respective binary file;
comparing two or more sets of the plurality of sets of metrics, to select a particular binary file (par. [0016] “select one of the software stack candidates … based on its corresponding score … the highest score”, par. [0015] “the scoring function can take into account hardware and software characteristics of … the target build environment 122”); and
presenting, via the user interface, the particular binary file (par. [0027] “provide the performance metrics 128 or the feedback 130 to a user via a display”, it would at least have been obvious to include the recommendation in the feedback being displayed, to further inform the user of the outcome).
Pokorny does not disclose:
user input indicating a plurality of build architectures, and
instantiating a plurality of build environments.
Czerwonka teaches
input indicating a plurality of build architectures (col. 9, 29-32 “repeats until all required combinations are covered … in the form of resultant set of test cases 321”).
It would have been obvious at the time of filing to receive user input indicating a plurality of build architectures and to instantiate a plurality of build environments. Those of ordinary skill in the art would have been motivated to do so to ensure that all combinations are tested (e.g. Czerwonka col. 9, 29-32 “all required combinations are covered”).
Claim 12: Pokorny and Czerwonka teach the system of claim 11, wherein the user input received via the user interface further indicates a plurality of target architectures for execution of the plurality of binary files (Pokorny par. [0008] “recommended software-stack based on … a target computing environment in which the target software item is to … run”).
Claim 13: Pokorny and Czerwonka teach the system of claim 11, wherein the user input received via the user interface further indicates a plurality of build operating systems, wherein each of the plurality of build environments is instantiated respectively according to the plurality of build operating systems (Pokorny par. [0008] “based on … a target computing environment in which the target software item is to be built”, par. [0017] “indicated … an operating system”).
Claim 14: Pokorny and Czerwonka teach the system of claim 11, wherein the user input received via the user interface further indicates a plurality of target operating systems, wherein each respective binary file is generated respectively according to the plurality of target operating systems (Pokorny par. [0008] “based on … a target computing environment in which the target software item … is to run”, par. [0017] “indicated … an operating system”).
Claim 15: Pokorny and Czerwonka teach the system of claim 11, wherein the user input received via the user interface further indicates a plurality of compilation methods, wherein compiling the first set of source code in each of the plurality of build environments comprises compiling the first set of source code respectively according to the plurality of compilation methods (Pokorny par. [0020] “compiler flags or settings for use in compiling source code”).
Claim 18: Pokorny discloses a method for testing binary files, the method comprising:
obtaining, with an electronic processor, a compute type and a compilation method, the compute type specifying a respective build architecture (par. [0017] “recommended software-stack 106 can specify certain hardware components or software packages that are to be included in the target build environment 122”, par. [0020] “compiler flags or settings for use in compiling source code”);
generating, with the electronic processor, a test compilation scenario based on the compute type and the compilation method (par. [0017] “recommended software-stack 106 can specify certain hardware components or software packages that are to be included in the target build environment 122”),
wherein each of the plurality of test scenarios specifies a respective build architecture from the set of build architectures and a respective compilation method from the plurality of compilation methods (par. [0017] “recommended software-stack 106 can specify certain hardware components or software packages that are to be included in the target build environment 122”, par. [0020] “compiler flags or settings for use in compiling source code”); and
for each respective test compilation scenario of the plurality of test scenarios:
instantiating a respective build environment according to the respective build architecture (e.g. par. [0023] “install the specified packages 110”, par. [0037] “configure the build process 108 so as to be perform on the target built [sic] environment 122”),
compiling a first set of source code in the respective build environment using the respective compilation method to generate a respective binary file (par. [0040] “commanding a compiler to compile source code”), and
generating a respective set of metrics for the respective binary file (par. [0043] “determine performance metrics 128 for the software builds 124”) to generate a plurality of sets of metrics, the respective set of metrics for each respective binary file comprising one or more of:
a file size of the respective binary file,
a page size of the respective build environment,
a number of CPUs in the respective build environment (par. [0017] “certain number … of processor”),
a model of CPUs in the respective build environment (par. [0017] “certain … type of processor”),
CPU flags in the respective environment, or
a number of strings included in the respective binary file;
comparing two or more sets of the plurality of sets of metrics, to select a particular binary file (par. [0016] “select one of the software stack candidates … based on its corresponding score … the highest score”, par. [0015] “the scoring function can take into account hardware and software characteristics of … the target build environment 122”); and
determining a target test compilation scenario, of the plurality of test compilation scenarios, that is associated with the particular binary file (par. [0016] “select one of the software stack candidates”); and
generating a recommendation, for compiling a second set of source code, comprising the target test compilation scenario (par. [0016] “a recommended software stack”).
Pokorny does not disclose:
obtaining a plurality of compute types
generating a plurality of test scenarios based on permutations of the plurality of compute types and the plurality of compilation methods.
Czerwonka teaches:
obtaining a plurality of compute types (col. 9, lines 26-29 “sequences in the list”)
generating a plurality of test scenarios based on permutations of the plurality of compute types (col. 9, 29-32 “repeats until all required combinations are covered … in the form of resultant set of test cases 321”).
It would have been obvious at the time of filing to obtain a plurality of compute types and compilation methods and generate a plurality of test scenarios based on the compute types and compilation methods. Those of ordinary skill in the art would have been motivated to do so to ensure that all combinations are tested (e.g. Czerwonka col. 9, 29-32 “all required combinations are covered”).
Claim 19: Pokorny and Czerwonka teach the method of claim 18, wherein each of the plurality of compute types further specifies a respective target architecture and wherein compiling the first set of source code in the respective build environment includes generating the respective binary file for the respective target architecture (Pokorny par. [0008] “recommended software-stack based on … a target computing environment in which the target software item is to … run”, par. [0040] “commanding a compiler to compile source code”).
Claim 20: Pokorny and Czerwonka teach the method of claim 18, wherein each set of the plurality of sets of metrics includes at least two selected from a group consisting of a file size of the respective binary file, a page size of the respective build environment, a number of CPUs in the respective build environment, a model of CPUs in the respective build environment, CPU flags in the respective build environment, and a number of strings included in the respective binary file.
Claim 23: Pokorny and Czerwonka teach the system of claim 1, wherein the electronic processor is further configured for:
determining the plurality of test compilation scenarios, wherein determining the plurality of test compilation scenarios comprises:
identifying a plurality of candidate compute resources (Pokorny par. [0002] “a set of software components (e.g., packages)”);
identifying a plurality of candidate compilation methods (Pokorny par. [0009] “compiler flags, and other parameters”);
determining combinations of (a) a compute resource from the plurality of candidate compute resources and (b) a compilation method from the plurality of candidate compilation methods (Pokorny par. [0014] “a search on a search space containing many or all possible combinations”); and
determining the plurality of test compilation scenarios based on the plurality of candidate compilation methods (Pokorny par. [0015] “using a scoring function to characterize each of the software stack candidates”).
Claim(s) 8-9 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2023/0053820 to Pokorny et al. (Pokorny) in view of US 7,293,257 to Czerwonka (Czerwonka) in view of US 11,200,157 to Mathew et al. (Mathew).
Claim 8: Pokorny and Czerwonka teach the system of claim 1, wherein instantiating the respective build environment includes copying application dependencies of the first set of source code (Pokorny par. [0040] “download the specified packages 110”).
Pokorny does not disclose:
copying application dependencies of the first set of source code to a virtual machine.
Mathew teaches:
copying application dependencies of the first set of source code to a virtual machine (col. 4, lines 47-50 “control instantiation of one or more container instances 150 … within virtual computing environments 140”).
It would have been obvious at the time of filing to copy the packages to a virtual machine. Those of ordinary skill in the art would have been motivated to do so as a known target environment in which to test the software which would have produced only the expected results.
Claim 9: Pokorny and Czerwonka teach the system of claim 1, but does not disclose wherein instantiating the respective build environment includes instantiating a container.
Mathew teaches
instantiating the respective build environment includes instantiating a container (col. 4, lines 47-50 “control instantiation of one or more container instances 150 … to perform operations required to build … software modules”).
It would have been obvious at the time of filing to instantiate the build environment as a container. Those of ordinary skill in the art would have been motivated to do so as a known environment in which to build and test the software which would have produced only the expected results.
Claim 16: Pokorny and Czerwonka teach the system of claim 11, but do not explicitly disclose wherein the electronic processor is further configured to provide, via the user interface, a list of available build architectures and wherein the user input includes a selection of one or more build architectures from the list of available build architectures.
Mathew teaches providing, via a user interface, a list of available build service configurations (col. 6 lines “User input … selected element from a list … configure operation of the build service 120 by selecting from configuration options”).
It would have been obvious at the time of filing to provide a list of available architectures for selection by the user. Those of ordinary skill in the art would have been motivated to do so as a known means of receiving user input which would have produced only the expected results (e.g. Pokorny par. [0010] “user-inputted restrictions”).
Claim(s) 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2023/0053820 to Pokorny et al. (Pokorny) in view of US 7,293,257 to Czerwonka (Czerwonka) in view of Official Notice.
Claim 17: Pokorny and Czerwonka teach the system of claim 11, wherein the electronic processor is configured to receive the user input indicating the plurality of build architectures (e.g. Pokorny par. [0010] “user-inputted restrictions”) but do not explicitly disclose receiving the user input via a dropdown menu included in the user interface.
It is officially noted that drop-down menus were well known in the art.
It would have been obvious at the time to receive the user input via a drop-down menu. Those of ordinary skill in the art would have been motivated to do so as a known means of receiving user input which would have produced only the expected results.
Claim(s) 21-22 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2023/0053820 to Pokorny et al. (Pokorny) in view of US 7,293,257 to Czerwonka (Czerwonka) in view of US 2011/0022653 to Werth et al. (Werth).
Claim 21: Pokorny and Czerwonka teach the system of claim 1, but do not explicitly teach wherein based on the recommendation, the system automatically generates the particular binary file using a particular test compilation scenario without presenting the test compilation scenario to a user for user confirmation.
Werth teaches:
based on the recommendation, the system automatically provisions a particular scenario without presenting the test compilation scenario to a user for user confirmation (par. [0206] “a recommendation for a compilation of services … provision the remote service automatically upon receipt”).
It would have been obvious at the time of filing to automatically generate the binary file without user confirmation. Those of ordinary skill in the art would have been motivated to do so as a known means of implementing a recommendation which would have produced only the expected results.
Claim 22: Pokorny and Czerwonka teach the system of claim 1, but do not explicitly teach wherein the electronic processor is further configured for presenting the recommendation to a user and generating the particular binary file using a particular test compilation scenario based on user confirmation selecting the test compilation scenario.
Werth teaches:
presenting a recommendation to a user and provisioning a particular scenario based on user confirmation selecting the scenario (par. [0206] “a recommendation for a compilation of services … provision the remote service automatically upon receipt”).
It would have been obvious at the time of filing to generate the binary file based on user confirmation. Those of ordinary skill in the art would have been motivated to do so as a known means of implementing a recommendation which would have produced only the expected results.
Claim(s) 24-25 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2023/0053820 to Pokorny et al. (Pokorny) in view of US 7,293,257 to Czerwonka (Czerwonka) in view of US 7,080,356 to Atallah et al. (Atallah).
Claim 24: Pokorny and Czerwonka teach the system of claim 1, but do not explicitly teach wherein generating the respective set of metrics for each respective binary file of the plurality of binary files comprises:
executing one or more binary analysis scripts on the respective binary file after compilation.
Atallah teaches:
generating metrics comprising executing one or more binary analysis scripts on the respective binary file after compilation (col. 3, lines 43-46 “PERL script that statically examines … binaries”).
It would have been obvious before the effective filing date of the claimed invention to execute a script on the binaries. Those of ordinary skill in the art would have been motivated to do so as a known means of gathering metrics which would have produced only the expected results.
Claim 25: Pokorny, Czerwonka, and Atallah teach the system of claim 1, wherein generating the respective set of metrics for each respective binary file of the plurality of binary files comprises:
executing one or more executable and linkable format info scripts on the respective binary file after compilation (Atallah col. 3, lines 43-46 “ELF (Executable and Linking Format) binaries”).
Claim(s) 26 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2023/0053820 to Pokorny et al. (Pokorny) in view of US 7,293,257 to Czerwonka (Czerwonka) in view of US 2015/0363192 to Sturtevant (Sturtevant) in view of “Tool Interface Standard (TIS) Executable and Linking Format (ELF) Specification” (ELF).
Claim 26: Pokorny and Czerwonka teach claim 1, but do not explicitly teach each set of the plurality of sets of metrics includes a file size of the respective binary file and a number of strings included in the respective binary file.
Sturtevant teaches:
metrics including a file size (par. [0130] “other metrices associated with files … file size”).
ELF teaches:
metrics including a number of strings (pg. 1-17 “String table sections hold null-terminated character sequences”, note it would at least have been within the ordinary level of skill and creativity to count the “nulls” and subtract 1).
It would have been obvious at the time of filing to generate metrics including at least two metrics. Those of ordinary skill in the art would have been motivated to do so to “provide a nuanced and complete view” of the testing (Sturtevant par. [0130], also see ELF pg. 1-1 “a ‘road map’ describing the file’s organization”).
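For illustration only (an examiner’s sketch, not relied upon as evidence): counting the strings in an ELF string table is arithmetic of the most basic kind, since per the cited specification the table begins with a null byte and each entry is null-terminated. The table bytes below are hypothetical.

```python
def count_strings(string_table: bytes) -> int:
    """Count named strings in an ELF-style string table.

    The table starts with a null byte (the empty string at index 0)
    and every entry ends in a null byte, so the number of named
    strings is the count of null bytes minus one.
    """
    return string_table.count(b"\x00") - 1

# Hypothetical table holding the two strings "name." and "Variable":
table = b"\x00name.\x00Variable\x00"
n = count_strings(table)
```

An empty table consisting of the single initial null byte yields zero strings, consistent with the specification’s layout.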
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASON D MITCHELL whose telephone number is (571)272-3728. The examiner can normally be reached Monday through Thursday 7:00am - 4:30pm and alternate Fridays 7:00am - 3:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lewis Bullock can be reached at (571)272-3759. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JASON D MITCHELL/Primary Examiner, Art Unit 2199