DETAILED ACTION
The present application is being examined under the pre-AIA first to invent provisions.
Response to the communication dated 12/02/2025
Claims 1, 10, 19 are amended.
Claims 1 – 20 are presented for examination.
Continued Examination
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/02/2025 has been entered.
Response to Arguments
Double Patenting
The terminal disclaimer was accepted on 12/1/2025.
Claim Rejections - 35 USC § 103
The Applicant has amended the claims to recite “… wherein the in-memory model is database-neutral and distinct from a database engine of the target database…” and asserts that Tillias_2012 does not make such limitations obvious to those of ordinary skill in the art.
Therefore, at issue is whether the art of record makes obvious “… wherein the in-memory model is database-neutral and distinct from a database engine of the target database…”.
In response, the argument is not persuasive.
A review of the specification shows that a database-neutral format is a way of representing database schema information or changes in a format that is not specific to any particular commercial database product or vendor (pp. 18, 22). This allows the same schema model or changes to be used with different types of databases, facilitating comparisons and migration across varied systems (p. 26).
The document provides an example of a change log in an XML format as one embodiment of a database-neutral format (p. 23). This XML defines the structure using generic elements like <createTable> and <column>, rather than specific SQL commands that might vary between database systems (pp. 18, 23).
Tillias_2012 uses a database-neutral format to define the database schema and data deployment. The article demonstrates this using Liquibase changelog files written in XML (pp. 4-5).
The XML format used by Liquibase employs generic, abstract tags like <createTable> and <column> to describe the desired database structure (tables, columns, primary keys, foreign keys) rather than specific vendor-dependent SQL commands (p. 5). This allows the same master.xml file to be deployed to different database types, such as HSQLDB for testing and a production database, with the Liquibase tool translating the generic commands into the appropriate syntax for each target system (pp. 26-27).
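For illustration only, a minimal changelog of the kind described might be sketched as follows (the element names follow the Liquibase changelog format; the table and column names are hypothetical and do not appear in the references):

```xml
<databaseChangeLog>
  <!-- One changeset describing a table in vendor-neutral terms;
       the Liquibase tool translates these generic elements into
       the SQL dialect of whichever target database is configured. -->
  <changeSet id="1" author="example">
    <createTable tableName="customer">
      <column name="id" type="int">
        <constraints primaryKey="true" nullable="false"/>
      </column>
      <column name="name" type="varchar(255)"/>
    </createTable>
  </changeSet>
</databaseChangeLog>
```

Because nothing in the changelog is vendor-specific, the same file can be deployed to, for example, an in-memory HSQLDB instance for testing and a different production database.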
When the claim is interpreted in view of the specification, the XML changelog file corresponds to model 206 of the specification.
Paragraph 69 states: “… an xml representation of database schema changes or other change data…”
Paragraph 77 states: “… a changeset may be equivalent to one or more SQL statements, but represented in a database-neutral format…”
Paragraph 91 of the specification states: “… existing model 206 of a database schema or receive a set of schema information 202 and build model 206 of the database schema… schema information 202 may include a set of snapshot changesets specifying the changes that would be required to create the database schema from an empty schema. Simulation service 200 can map the schema information 202 to model 206. Preferably, model 206 may be a database neutral model representing a schema, tables in the schema, columns, constraints or other database objects…”
Paragraph 96 states a change log can be persisted as an XML file… the following provides one example of a change log…
Accordingly, the model 206 is a unified or abstract representation of a database schema that maps data between a specified database management system (DBMS) schema and object classes. The XML changelog acts as this database-neutral, platform-agnostic model. It uses abstract "change types" like <createTable> that describe what should be done, rather than using raw, vendor-specific SQL commands. The changelogs of Tillias_2012 serve the same purpose as outlined in the instant specification and allow the simulation service tool to generate the correct database-specific SQL at run-time for the target database (e.g., HSQLDB, Oracle, MySQL, etc.).
Further, as outlined in the prior Office action, McGarr_2010 also teaches “… developers define their desired database changes in XML files. The XML file, called a changelog… that define a desired database change in a database agnostic abstraction…”
Therefore, Tillias_2012 clearly demonstrates the use of an in-memory, vendor-neutral model in XML and McGarr_2010 explicitly recites “database agnostic abstraction.”
Accordingly, the Examiner finds that Tillias_2012 and McGarr_2010, contrary to the Applicant’s arguments, do in fact make the amended claim elements obvious to those of ordinary skill in the art. The arguments are not persuasive and the rejection is maintained.
Claim Rejections - 35 USC § 103
The following is a quotation of pre-AIA 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:
(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 19, 20 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Tillias_2012 (Unit Testing and Integration Testing using Junit, Liquibase, HSQLDB, Hibernate, Maven and Spring Framework, Tillias’s Blog November 10, 2012) in view of McGarr_2010 (Manage Database Changes with Liquibase, Early and Often www.MikeMcGarr.com June 28, 2010) in view of Dzone_2010 (Continuous Integration: Patterns and Anti-Patterns by Paul M. Duvall, DZone Refcardz, 2010) in view of Huang_2007 (US 2007/0112886 A1) in view of Vijayasylvester_2010 (How to fetch the row count for all tables in an SQL Server database, 2/8/2010).
Claim 19. Tillias_2012 makes obvious “A [computer implemented] method comprising:
accessing a changelog, the changelog comprising representing a baseline state of a target database and proposed changes to the target database in a [XML] format (page 1: “… database schema/data deployment will be created using Liquibase…”; page 4: “… Liquibase will be using master.xml database changelog…” NOTE: while XML is an industry-standard format, Tillias_2012 does not explicitly state that this is “database neutral”); building an in-memory model of a database schema of the target database based on the changelog, the in-memory model of the database schema comprising a set of model objects which are related according to the database schema, wherein the in-memory model is [XML] and distinct from a database engine of the target database (page 1: “…we will create maven project, Liquibase schema/data deployment to the in-memory database and run integration test against the database… HSQLDB… to test schema/data deployment… database schema/data deployment will be created using Liquibase… and in-memory database (HSQLDB in memory mode)…”; pages 4-5: Liquibase changelog files written in XML. The XML format used by Liquibase employs generic, abstract tags like <createTable> and <column> to describe the desired database structure (tables, columns, primary keys, foreign keys) rather than specific vendor-dependent SQL commands.
This allows the same master.xml file to be deployed to different database types, such as HSQLDB; page 6 illustrates the schema; page 24: “… we’ll be using HSQLDB in-memory mode…”); accessing a set of [expected values] (page 27: unit/integration test as part of “foo()”); simulating an application of the proposed changes to the target database to create an updated version of the in-memory model, the simulating comprising: selecting a change from the proposed changes as a selected change; and mapping the selected change to a command on the in-memory model to update the in-memory model according to the selected change (pages 4-8: “… using master.xml database changelog… that schema has been deployed… lets add some data using separate changesets inside master.xml…”; page 9 illustrates the mapping of the changeset to commands. NOTE: these changes are applied to a “sandbox” database which simulates a production database. The integration testing simulates production database use in the sandbox database); applying the set of [expected values] to the updated version of the in-memory model to determine that the selected change violates the set of [expected values] (page 27: unit/integration test as part of “foo()”, sb.append); generating a forecast report based on a result of the simulating, wherein the forecast report indicates: a prediction of failure, or a performance impact of an implementation of the set of proposed changes, the performance impact being predicted based at least in part on the collected database profile information (page 27: unit/integration test as part of “foo()”, String actualValue = service.getValue(), Assert.assertEquals (“Service returns stat that isn’t deployed from \” actualValue); page 28 illustrates a graphical output that is capable of providing error notifications and failure notifications. NOTE: the claimed “or a performance impact…” is claimed in the alternative and therefore is not required by the claim.).
While Tillias_2012 clearly teaches changelogs that include changesets in an XML format, and while it may properly be found that, to one of ordinary skill in the art, XML is “database neutral” because XML is a profile of an ISO standard, Tillias_2012 does not EXPLICITLY recite “database neutral.”
While Tillias_2012 teaches to perform integration testing, which involves comparing simulation results to expectations, Tillias_2012 does not EXPLICITLY recite that integration testing expectations are rules. Therefore, Tillias_2012 does not EXPLICITLY illustrate accessing a set of “rules” nor applying the set of “rules.” Further, while Tillias_2012 clearly teaches to determine that database changes may result in errors and failures, which clearly makes violations obvious to those of ordinary skill in the art, Tillias_2012 does not EXPLICITLY recite that the violations (i.e., errors/failures) are violations of “rules.”
Further, while Tillias_2012 clearly illustrates executing a method that involves executing software and explicitly displays computer user interfaces (pages 2, 3, 7, 8, 12, 28), and while this may properly be found to make it obvious to those of ordinary skill in the art that the method is a “computer implemented” one, Tillias_2012 does not EXPLICITLY recite “computer implemented.”
McGarr_2010, however, makes obvious “database neutral” (page 1: “… Liquibase is an open-source database change management tool… developers define their desired database changes in XML files. The XML file, called a changelog… contains a list of changesets… that define a desired database change in a database agnostic abstraction. The changelog is intended to contain an evolving list of database changes the team would like to apply to a target database… Liquibase will apply the changesets directly to the database…”).
Tillias_2012 and McGarr_2010 are analogous art because they are from the same field of endeavor called databases. Before the effective filing date of the invention it would have been obvious to a person of ordinary skill in the art to combine Tillias_2012 and McGarr_2010. The rationale for doing so would have been that Tillias_2012 teaches to use Liquibase changelogs in XML format and McGarr_2010 teaches that the XML format of Liquibase changelogs is known to be database agnostic. Therefore, it would have been obvious to combine Tillias_2012 and McGarr_2010 for the benefit of using the open-source Liquibase changelogs with any database to obtain the invention as specified in the claims.
Dzone_2010 makes obvious to “access a set of rules” and to “apply a set of rules” and to determine violations of “the set of rules” while performing integration testing (page 5: “Continuous Inspection… analysis to find common problems. Have these tools run as part of continuous integration or periodic builds… rules.xml… failOnViolation… out =”${checkstyle.report.file}”…”).
Tillias_2012 and Dzone_2010 are analogous art because they are from the same field of endeavor called integration testing and/or databases and/or software. Before the effective filing date it would have been obvious to a person of ordinary skill in the art to combine Tillias_2012 and Dzone_2010.
The rationale for doing so would have been that Tillias_2012 teaches to perform integration testing and Dzone_2010 teaches to include rules as part of the integration testing to prevent common errors and find violations of those rule which prevent errors. Therefore, it would have been obvious to combine Tillias_2012 and Dzone_2010 for the benefit of improving integration testing to obtain the invention as specified in the claims.
Huang_2007 makes obvious a “computer implemented” method (abstract: “a computer implemented method, apparatus, and computer usable program code to” evaluate changes in a database.).
Tillias_2012 and Huang_2007 are analogous art because they are from the same field of endeavor called databases. Before the effective filing date it would have been obvious to a person of ordinary skill in the art to combine Tillias_2012 and Huang_2007. The rationale for doing so would have been that Tillias_2012 teaches to perform a testing method on a database and illustrates graphical user interfaces and commands lines used to perform that testing and Huang_2007 teaches to use a computer to implement testing on a database. Therefore, it would have been obvious to combine Tillias_2012 and Huang_2007 for the benefit of having a computer upon which to perform the method of Tillias_2012 to obtain the invention as specified in the claims.
Vijayasylvester_2010 makes obvious “Collecting database profile information from the target database, wherein the database profile information comprises row counts for one or more tables of the target database” (page 1: Vijaysylvester asks how to get row counts for all tables from an SQL database. Adrianbanks provided an SQL script that gets the table_name and row_count for each table and outputs a list of tables and the row count for each table.)
Tillias_2012 and Vijayasylvester_2010 are analogous art because they are from the same field of endeavor called databases. Before the effective filing date it would have been obvious to a person of ordinary skill in the art to combine Tillias_2012 and Vijayasylvester_2010. The rationale for doing so would have been that Tillias_2012 teaches to have a SQL database with tables and a schema. Vijayasylvester_2010 teaches to determine the row count in order to know if there is any data in a table for the purpose of re-incarnating the database. Therefore, it would have been obvious to combine Tillias_2012 and Vijayasylvester_2010 for the benefit of knowing which tables have data for the purpose of re-incarnating tables to obtain the invention as specified in the claims.
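For context, a row-count query of the general kind discussed in Vijayasylvester_2010 might be sketched as follows (this sketch uses the SQL Server system catalog views sys.tables and sys.partitions; the exact script provided in the reference may differ):

```sql
-- Approximate row counts for every user table, read from
-- partition metadata rather than a COUNT(*) over each table.
SELECT t.name AS table_name,
       SUM(p.rows) AS row_count
FROM sys.tables t
JOIN sys.partitions p
  ON p.object_id = t.object_id
WHERE p.index_id IN (0, 1)   -- heap (0) or clustered index (1) only
GROUP BY t.name
ORDER BY t.name;
```

Such a query returns one row per table with its name and row count, which is the “database profile information” mapped to the claim.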
Claim 20. Tillias_2012 also makes obvious “further comprising automatically generating an alert based on a determination that the result of simulating the application of the proposed changes includes an error indicative of failure” (page 28 illustrates a graphical interface that indicates errors and failures).
Huang_2007 also makes obvious “further comprising automatically generating an alert based on a determination that the result of simulating the application of the proposed changes includes an error indicative of failure” (par 68: “… an incompatible target database model… with this situation, an error is generated…”; par 97: “… determine if errors are present… an error is generated if a mapping is present because the structure is already present and cannot be created again…”; Fig. 15; NOTE: the errors are with regard to various rules, see, for example, pars 105-108, which are written in XML).
Claims 1, 10, 3, 12, 5, 14, 6, 15, 7, 16, 2, 11, 8, 17 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Tillias_2012 in view of McGarr_2010 in view of Dzone_2010 in view of Checkstyle_2013 (downloaded from Wikipedia archive dated 1/25/2013) in view of Huang_2007 in view of Vijayasylvester_2010 (How to fetch the row count for all tables in an SQL Server database, 2/8/2010).
Claim 1. Tillias_2012 makes obvious opening a connection to a target database; collecting a snapshot of a current state of a database schema for the target database (pages 1-3: create a maven project, create a sandbox database with HSQLDB, Liquibase integration); receiving a set of proposed changes to the database schema of the target database (page 1: “… database schema/data deployment will be created using Liquibase…”; page 3: Liquibase integration; page 4: “… database schema / data deployment… Liquibase will be using master.xml database changelog…”; page 5: illustrates changelog deployment; page 7: “… schema has been deployed… add some data using separate changesets inside master.xml…”); building an in-memory model of the database schema of the target database based on the snapshot and the changes, the in-memory model of the database schema comprising a set of model objects which are related according to the database schema, wherein building the in-memory model comprises loading an initial model in memory and applying the first set of changes starting with the initial model, wherein the in-memory model is [XML] and distinct from a database engine of the target database (page 1: “…we will create maven project, Liquibase schema/data deployment to the in-memory database and run integration test against the database… HSQLDB… to test schema/data deployment… database schema/data deployment will be created using Liquibase… and in-memory database (HSQLDB in memory mode)…”; pages 4-5: Liquibase changelog files written in XML. The XML format used by Liquibase employs generic, abstract tags like <createTable> and <column> to describe the desired database structure (tables, columns, primary keys, foreign keys) rather than specific vendor-dependent SQL commands.
This allows the same master.xml file to be deployed to different database types, such as HSQLDB; page 6 illustrates the schema; page 24: “… we’ll be using HSQLDB in-memory mode…”); accessing a set of [expected values] (page 27: unit/integration test as part of “foo()”); simulating an application of the set of proposed changes to the target database to create an updated version of the in-memory model, the simulating comprising: selecting a change from the set of proposed changes as a selected change; and mapping the selected change to a command on the in-memory model to update the in-memory model according to the selected change (pages 4-8: “… using master.xml database changelog… that schema has been deployed… lets add some data using separate changesets inside master.xml…”; page 9 illustrates the mapping of the changeset to commands. NOTE: these changes are applied to a “sandbox” database which simulates a production database. The integration testing simulates production database use in the sandbox database); applying the set of [expected values] to the updated version of the in-memory model to determine if the set of proposed changes violates the set of [expected values] and logging an error in association with the selected change and a state of the in-memory model” (page 27: unit/integration test as part of “foo()”, String actualValue = service.getValue(), Assert.assertEquals (“Service returns stat that isn’t deployed from \” actualValue); page 28 illustrates a graphical output that is capable of providing error notifications, failure notifications, and error trace);
generating a forecast report based on a result of the simulating, wherein the forecast report indicates: a prediction of failure, or a performance impact of an implementation of the set of proposed changes, the performance impact being predicted based at least in part on the collected database profile information (page 27: unit/integration test as part of “foo()”, String actualValue = service.getValue(), Assert.assertEquals (“Service returns stat that isn’t deployed from \” actualValue); page 28 illustrates a graphical output that is capable of providing error notifications and failure notifications
NOTE: the claimed “or a performance impact…” is recited in the alternative and therefore the limitations are not required by the claim.)
While Tillias_2012 clearly illustrates executing a method that involves executing software and explicitly displays computer user interfaces (pages 2, 3, 7, 8, 12, 28), and while this may properly be found to make it obvious to those of ordinary skill in the art that the method is performed by “A computer program product comprising a non-transitory, computer-readable medium storing computer-executable instructions, the computer-executable instructions comprising instructions for”, Tillias_2012 does not EXPLICITLY recite “A computer program product comprising a non-transitory, computer-readable medium storing computer-executable instructions, the computer-executable instructions comprising instructions for.”
Tillias_2012 does not explicitly recite “Collecting database profile information from the target database, wherein the database profile information comprises row counts for one or more tables of the target database”
While Tillias_2012 clearly teaches integration testing that is performed prior to deployment into the intended production environment, and while this may properly be found to make obvious to those of ordinary skill in the art “and deploying the set of proposed changes to a target environment if the result of simulating the application of the set of proposed changes does not include at least one error indicative of failure”, because when no errors are found during integration testing this indicates that the database should also work properly in the intended production environment and it is therefore safe to deploy the database for production use, Tillias_2012 nevertheless does not EXPLICITLY recite “and deploying the set of proposed changes to a target environment if the result of simulating the application of the set of proposed changes does not include at least one error indicative of failure.”
While Tillias_2012 teaches to perform integration testing, which involves comparing simulation results to expectations, Tillias_2012 does not EXPLICITLY recite that integration testing expectations are rules. Therefore, Tillias_2012 does not EXPLICITLY illustrate accessing a set of “rules comprising rules applied by the target database” nor applying the set of “rules.” Further, while Tillias_2012 clearly teaches logging an error in association with the selected change and a state of the in-memory model, Tillias_2012 does not EXPLICITLY recite that this is done “based on a determination that the command violates the set of rules.”
McGarr_2010; however, makes obvious “and deploying the set of proposed changes to a target environment if the result of simulating the application of the set of proposed changes does not include at least one error indicative of failure” (page 2: “… developers test their code changes locally on their own machine before checking code in, you want them to be able to test their database changes…
By integrating a continuous integration server, you can have your development database updated on a nightly basis, integrating all changes checked in that day… our Liquibase process runs before the integration tests that depend on these changes. If our build fails for whatever reason, we don’t want the database changes to have been migrated to our development database. If the build is successful, then we re-run Liquibase, this time pointing it at the development database. This takes longer but ensure[s] that changes migrated to development database will work…”).
Tillias_2012 and McGarr_2010 are analogous art because they are from the same field of endeavor called database testing/management. Before the effective filing date it would have been obvious to a person of ordinary skill in the art to combine Tillias_2012 and McGarr_2010. The rationale for doing so would have been that Tillias_2012 teaches to perform integration testing and to check for errors in the database build. McGarr_2010 teaches that when making changes to a database you should test them before checking the code into the revision system. McGarr_2010 also teaches to use continuous integration testing and to integrate changes that have been checked (i.e., verified). McGarr_2010 also states that if a build fails “we don’t want the database changes to have been migrated to our development database” but “if the build is successful” they migrate /deploy the changes for the purpose of “[ensuring] that changes migrated to the development database will work.” Therefore, it would have been obvious to combine Tillias_2012 and McGarr_2010 for the benefit of ensuring that changes will work after checked in, migrated, or deployed to obtain the invention as specified in the claims.
Additionally, McGarr_2010 on page 2 teaches to follow “best practices” and cites the Liquibase Practices documentation and also teaches to be consistent with naming conventions, stating: “… Be Consistent about Naming! – Liquibase doesn’t force you to name every constraint you define, which allows you to leave it up to the database to autogenerate a name. This can get you into trouble, especially if some of your constraint names are explicitly defined and use the same naming convention as the database…”
Therefore, McGarr_2010 teaches the concept of best-practice rules, illustrates a scenario involving constraints, which are “rules comprising rules applied by the target database,” and further illustrates how a database “command violates the set of rules.”
Nevertheless, Tillias_2012 and McGarr_2010 do not EXPLICITLY use the word “rules,” nor illustrate logging errors “based on a determination that the command violates the set of rules,” nor “Collecting database profile information from the target database, wherein the database profile information comprises row counts for one or more tables of the target database.”
Dzone_2010, however, makes obvious to “access a set of rules” and to “apply a set of rules” and to determine violations to “the set of rules” while performing integration testing (page 5: “Continuous Inspection… analysis to find common problems. Have these tools run as part of continuous integration or periodic builds… rules.xml… failOnViolation… out =”${checkstyle.report.file}”…”).
Tillias_2012 and Dzone_2010 are analogous art because they are from the same field of endeavor called integration testing and/or databases and/or software. Before the effective filing date it would have been obvious to a person of ordinary skill in the art to combine Tillias_2012 and Dzone_2010.
The rationale for doing so would have been that Tillias_2012 teaches to perform integration testing and Dzone_2010 teaches to include rules as part of the integration testing to prevent common errors and find violations of those rule which prevent errors. Therefore, it would have been obvious to combine Tillias_2012 and Dzone_2010 for the benefit of improving integration testing to obtain the invention as specified in the claims.
While Dzone_2010 teaches to apply rules and to check for rule violations, and while McGarr_2010 does teach constraints, which are “rules comprising rules applied by the target database”, and further teaches naming convention rules and illustrates how a database “command violates the set of rules” (see above), Dzone_2010 does not EXPLICITLY teach that the rules may be, for example, naming convention rules.
Checkstyle_2013, however, makes obvious that the accessed and applied rules may be custom rules and include naming convention rules (page 1: “… improve software quality… Checkstyle can perform a series of automated programming style checks. These checks can be enabled or disabled individually, as well as configured for the programming style defined in the project being checked. Failure of a check results in an error or warning… Checkstyle can be extended with custom checks…”; page 2: “… naming conventions – checks for compliance with defined naming conventions…” NOTE: while naming convention checking is a module that comes with the tool, and while other standard modules are included such as “duplicate code”, because Checkstyle can be extended with custom checks and configured for the programming style of the project, the checking/rules can be used for rules other than just naming conventions.)
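For illustration only, a minimal Checkstyle configuration enabling a naming-convention check might be sketched as follows (the module names follow the Checkstyle configuration format; the specific rules file used in the references is not reproduced in the record):

```xml
<?xml version="1.0"?>
<!DOCTYPE module PUBLIC
  "-//Checkstyle//DTD Checkstyle Configuration 1.3//EN"
  "https://checkstyle.org/dtds/configuration_1_3.dtd">
<module name="Checker">
  <module name="TreeWalker">
    <!-- Flags member variables that do not follow the
         configured naming convention. -->
    <module name="MemberName"/>
  </module>
</module>
```

A rules file of this kind is what a continuous-integration build consults when checking for rule violations, as described in Dzone_2010.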
Dzone_2010 and Checkstyle_2013 are analogous art because they are from the same field of endeavor called software testing. Before the effective filing date it would have been obvious to a person of ordinary skill in the art to combine Dzone_2010 and Checkstyle_2013. The rationale for doing so would have been that Dzone_2010 teaches to perform continuous integration testing using Checkstyle rules and Checkstyle_2013 teaches that rules can include naming convention rules or any other custom rules desired. Therefore, it would have been obvious to combine Dzone_2010 and Checkstyle_2013 for the benefit of being able to check custom rules as well as naming conventions to obtain the invention as specified in the claims.
Furthermore, McGarr_2010 and Dzone_2010 are analogous art because they are from the same field of endeavor called software testing. Before the effective filing date it would have been obvious to a person of ordinary skill in the art to combine McGarr_2010 and Dzone_2010. The rationale for doing so would have been that McGarr_2010 teaches to perform continuous integration testing and also teaches to follow best practices (see McGarr_2010 page 2). Dzone_2010 teaches to perform continuous integration testing and explicitly illustrates using Checkstyle (see Dzone_2010 page 5). Therefore, it would have been obvious to those of ordinary skill in the art to combine McGarr_2010 and Dzone_2010 for the benefit of performing integration testing/continuous integration testing/inspection of best practices, including naming conventions, to ensure the software does not violate such rules to obtain the invention as specified in the claims.
Therefore, the combination of Tillias_2012 and McGarr_2010 and Dzone_2010 and Checkstyle_2013 makes obvious “rules comprising rules applied by the target database” and “applying the set of rules to the updated version of the in-memory model to determine if the set of proposed changes violates the set of rules and, based on a determination that the command violates the set of rules logging an error in association with the selected change and a state of the in-memory model.”
Huang_2007 makes obvious “A computer program product comprising a non-transitory, computer-readable medium storing computer-executable instructions, the computer-executable instructions comprising instructions for” (par 114: “… the invention can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system… computer readable medium can be any tangible apparatus that can contain, store, communicate, propagate, or transport the program…”).
Tillias_2012 and Huang_2007 are analogous art because they are from the same field of endeavor called databases. Before the effective filing date it would have been obvious to a person of ordinary skill in the art to combine Tillias_2012 and Huang_2007. The rationale for doing so would have been that Tillias_2012 teaches to perform a testing method on a database and illustrates graphical user interfaces and command lines used to perform that testing, and Huang_2007 teaches to use a computer to implement testing on a database. Therefore, it would have been obvious to combine Tillias_2012 and Huang_2007 for the benefit of having a computer upon which to perform the method of Tillias_2012 to obtain the invention as specified in the claims.
Vijayasylvester_2010 makes obvious “Collecting database profile information from the target database, wherein the database profile information comprises row counts for one or more tables of the target database” (page 1: Vijayasylvester asks how to get row counts for all tables from an SQL database. Adrianbanks provided an SQL script that gets the table_name and row_count for each table and outputs a list of tables and the row count for each table.).
Tillias_2012 and Vijayasylvester_2010 are analogous art because they are from the same field of endeavor called databases. Before the effective filing date it would have been obvious to a person of ordinary skill in the art to combine Tillias_2012 and Vijayasylvester_2010. The rationale for doing so would have been that Tillias_2012 teaches to have a SQL database with tables and a schema, and Vijayasylvester_2010 teaches to determine the row count in order to know if there is any data in a table for the purpose of re-incarnating the database. Therefore, it would have been obvious to combine Tillias_2012 and Vijayasylvester_2010 for the benefit of knowing which tables have data for the purpose of re-incarnating tables to obtain the invention as specified in the claims.
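By way of illustration only, the row-count collection discussed in Vijayasylvester_2010 can be sketched as follows (a minimal sketch using an in-memory SQLite database; the table names and function name are the examiner's hypothetical illustration, not code taken from any cited reference):

```python
import sqlite3

def collect_row_counts(conn):
    """Query the database catalog for each table's name, then count
    that table's rows, yielding profile information per table."""
    counts = {}
    cur = conn.execute("SELECT name FROM sqlite_master WHERE type = 'table'")
    for (table,) in cur.fetchall():
        (n,) = conn.execute(f'SELECT COUNT(*) FROM "{table}"').fetchone()
        counts[table] = n
    return counts

# Hypothetical schema resembling the Users/Roles tables discussed above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE Roles (id INTEGER PRIMARY KEY, role TEXT)")
conn.execute("INSERT INTO Users (name) VALUES ('alice'), ('bob')")
profile = collect_row_counts(conn)
```

Because the script obtains the table names as well as the counts, the resulting profile overlaps with the schema information itself.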
Claim 10. The limitations of claim 10 are substantially the same as those of claim 1 and are therefore rejected due to the same reasons as outlined above for claim 1. Additionally, Huang_2007 makes obvious the further limitations of “A computer-implemented method” (abstract: “a computer implemented method, apparatus, and computer usable program code to” evaluate changes in a database.).
Claims 3, 12. Tillias_2012 clearly teaches to use HSQLDB, which is an SQL database system, with three tables (Users, Roles, Users_roles) in the schema.
Vijayasylvester_2010 makes obvious “wherein the in-memory model of the database schema incorporates at least a portion of the database profile information” (page 1: Vijayasylvester asks how to get row counts for all tables from an SQL database. Adrianbanks provided an SQL script that gets the table_name and row_count for each table and outputs a list of tables and the row count for each table. NOTE: because the script is able to get the table names, this demonstrates profile data wherein the in-memory model of the database schema incorporates at least a portion of the database profile information. This is because the table names are part of the schema itself.).
Claims 5, 14. Checkstyle_2013 makes obvious “wherein the set of rules further comprises one or more additional user defined rules” (page 1: “… extended with custom checks…”).
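By way of illustration only, a Checkstyle configuration extended with a naming convention rule and a user-defined custom check might look like the following sketch (the MemberName module is part of the standard Checkstyle distribution; the custom check class name is hypothetical):

```xml
<!DOCTYPE module PUBLIC
    "-//Checkstyle//DTD Checkstyle Configuration 1.3//EN"
    "https://checkstyle.org/dtds/configuration_1_3.dtd">
<module name="Checker">
  <module name="TreeWalker">
    <!-- Built-in naming convention rule -->
    <module name="MemberName">
      <property name="format" value="^[a-z][a-zA-Z0-9]*$"/>
    </module>
    <!-- Hypothetical user-defined custom check -->
    <module name="com.example.checks.TableNameConventionCheck"/>
  </module>
</module>
```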
Claims 6, 15. Dzone_2010 makes obvious “wherein simulating the application of the set of proposed changes further comprises manipulating the in-memory model according to the selected change to update the in-memory model of the database schema even if the application of the selected change would result in an error” (page 5: failOnViolation="false").
Huang_2007 also makes obvious “wherein simulating the application of the set of proposed changes further comprises manipulating the in-memory model according to the selected change to update the in-memory model of the database schema even if the application of the selected change would result in an error” (FIG. 15 which illustrates simulating all elements in a loop even when an error is identified and then after all errors are identified presenting the results to the user).
Claims 7, 16. Tillias_2012 makes obvious “wherein simulating the application of the set of proposed changes to the in-memory model of the database schema further comprises iteratively repeating the selecting, mapping and determining for each change in the set of proposed changes until all the changes in the set of proposed changes have been used as the selected change” (page 27: foo() performs a set of tests).
Dzone_2010 also makes obvious “wherein simulating the application of the set of proposed changes to the in-memory model of the database schema further comprises iteratively repeating the selecting, mapping and determining for each change in the set of proposed changes until all the changes in the set of proposed changes have been used as the selected change” (page 5: failOnViolation="false").
Huang_2007 also makes obvious “wherein simulating the application of the set of proposed changes to the in-memory model of the database schema further comprises iteratively repeating the selecting, mapping and determining for each change in the set of proposed changes until all the changes in the set of proposed changes have been used as the selected change” (FIG. 15, which illustrates simulating all elements in a loop until everything has been tested).
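By way of illustration only, the claimed behavior of applying every proposed change to the in-memory model and logging violations without halting can be sketched as follows (a minimal hypothetical sketch; all names are the examiner's illustration, not code from any cited reference):

```python
def simulate(changes, model, rules):
    """Apply each proposed change to the in-memory model in turn,
    logging rule violations together with the model state but
    continuing (cf. failOnViolation="false")."""
    errors = []
    for change in changes:
        model = change(model)  # update the model even if a rule is violated
        for rule in rules:
            if not rule(model):
                errors.append((change.__name__, dict(model)))
    return model, errors

# Hypothetical in-memory model: a dict of table name -> column list.
def add_users(m): m["USERS"] = ["id", "name"]; return m
def add_roles(m): m["roles"] = ["id"]; return m

# Hypothetical naming convention rule: table names must be uppercase.
def uppercase_names(m): return all(t.isupper() for t in m)

final, errs = simulate([add_users, add_roles], {}, [uppercase_names])
```

Every change is used as the selected change, and each violation is logged with the state of the model at the time it occurred.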
Claims 2, 11. Tillias_2012 makes obvious “wherein the set of model objects comprises: one or more table objects representing tables of the database; one or more column objects representing columns of the database, each column object related in the in-memory model to a corresponding table object according to the database schema; one or more primary key constraint objects representing primary key constraints, each primary key constraint object related in the in-memory model to a column object representing a column of the database to which a corresponding primary key constraint applies; one or more foreign key constraint objects representing foreign key constraints, each foreign key constraint object related in the in-memory model to a column object representing a column of the database to which a corresponding foreign key constraint applies; and one or more data constraint objects representing data constraints, each data constraint object related in the in-memory model to a column object representing a column to which a corresponding data constraint applies” (page 5 illustrates a schema with constraints for primary keys and foreign keys, as well as tables and columns. Page 6 illustrates a graphical representation of the schema. The schema is in the sandbox database (the HSQLDB in-memory model).).
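By way of illustration only, the claimed relationships among model objects can be sketched as follows (a hypothetical sketch; the class and instance names are the examiner's illustration, not taken from Tillias_2012):

```python
from dataclasses import dataclass, field

@dataclass
class Column:
    name: str

@dataclass
class Table:
    name: str
    columns: list = field(default_factory=list)  # column objects related to this table

@dataclass
class PrimaryKeyConstraint:
    column: Column  # the column object to which the primary key applies

@dataclass
class ForeignKeyConstraint:
    column: Column      # the referencing column object
    references: Column  # the referenced column object

# Hypothetical instances mirroring the Users/Roles schema discussed above.
users_id = Column("id")
users = Table("USERS", [users_id, Column("name")])
pk = PrimaryKeyConstraint(users_id)
fk = ForeignKeyConstraint(Column("user_id"), users_id)
```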
Claims 8, 17. Tillias_2012 teaches to perform unit testing/integration testing by creating an in-memory model of the database (i.e., a schema with data) and determining whether errors and failures occur when commands are used that modify the in-memory model of the database. See, for example, pages 26–27.
Additionally, Dzone_2010 teaches to perform continuous integration testing which “is the process of building software with every change committed to a project’s version control repository”. See page 1 where the integration build generates feedback. This teaches to generate feedback by actually executing commands that perform modifications in the software build.
In combination, Dzone_2010 and Tillias_2012 make obvious that modifications integrated into the software build may be changes that modify an in-memory model of the database, and that feedback regarding errors and failures caused by these modifications may be obtained by performing the modifications on the in-memory model.
Additionally, McGarr_2010 on page 2 teaches that “you want [developers] to be able to test their database changes,” teaches to perform integration testing, and teaches user specified rules known as “best practices.” McGarr_2010 further teaches that “you can get into trouble, especially if some of your constraint names are explicitly defined and use the same naming convention as the database,” which makes obvious having a constraint based rule regarding naming conventions and testing modifications to names against such rules so that one does not “get in trouble.”
Tillias_2012 further teaches to have an in-memory model of a database schema that includes primaryKeyName and foreignKeyName constraints. See page 5.
In combination, Tillias_2012, Dzone_2010, and McGarr_2010 make obvious performing integration testing on an in-memory model of a database schema that includes at least primaryKeyName and foreignKeyName constraints, and obtaining feedback concerning errors/failures with regard to naming convention rules by executing instructions that perform name changes on the in-memory model of the database schema, for the purpose of making sure name changes do not “get you into trouble.”
Therefore, the prior art, in combination, makes obvious “wherein the computer-executable instructions comprise instructions for determining that the command violates the set of rules based on the command modifying the in-memory model of the database schema in violation of a set of constraints modeled in the in-memory model of the database schema.”
Claims 4, 13 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Tillias_2012 in view of McGarr_2010 in view of Dzone_2010 in view of Checkstyle_2013 in view of Huang_2007 in view of Vijayasylvester_2010 in view of StackOverFlow_2012 (Using Liquibase on existing schema, 8/2/2011 answered on 10/24/2011).
Claims 4, 13. StackOverFlow_2012 makes obvious “wherein collecting the snapshot of the current state of a database schema for the target database comprises querying the target database for the current state of the database schema” (page 1: “… you can generate a changelog.xml form an existing schema… generate a changelog from your existing schema. The Liquibase CLI can do that for you…”).
Tillias_2012 and StackOverFlow_2012 are analogous art because they are from the same field of endeavor called databases. Before the effective filing date it would have been obvious to a person of ordinary skill in the art to combine Tillias_2012 and StackOverFlow_2012. The rationale for doing so would have been that Tillias_2012 teaches to use Liquibase and StackOverFlow_2012 teaches features of Liquibase that allow the user to take a snapshot of the database. Therefore, it would have been obvious to combine Tillias_2012 and StackOverFlow_2012 for the benefit of using the features that are available in software to obtain the invention as specified in the claims.
Claims 4, 13 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Tillias_2012 in view of McGarr_2010 in view of Dzone_2010 in view of Checkstyle_2013 in view of Huang_2007 in view of Vijayasylvester_2010 in view of SNAPSHOT_2009 (custom drop-schema command? Nvoxland Sep 2009).
Claims 4, 13. SNAPSHOT_2009 makes obvious “wherein collecting the snapshot of the current state of a database schema for the target database comprises querying the target database for the current state of the database schema” (page 3: “… there is a generateChangeLog command that you can use to take a snapshot of a database schema (including data if need be) and create a starting changelong file…”).
Tillias_2012 and SNAPSHOT_2009 are analogous art because they are from the same field of endeavor called databases. Before the effective filing date it would have been obvious to a person of ordinary skill in the art to combine Tillias_2012 and SNAPSHOT_2009. The rationale for doing so would have been that Tillias_2012 teaches to use Liquibase and SNAPSHOT_2009 teaches features of the Liquibase software. Therefore, it would have been obvious to combine Tillias_2012 and SNAPSHOT_2009 for the benefit of using features of the Liquibase software that existed in the software builds to obtain the invention as specified in the claims.
Claims 9, 18 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Tillias_2012 in view of McGarr_2010 in view of Dzone_2010 in view of Checkstyle_2013 in view of Huang_2007 in view of Vijayasylvester_2010 in view of Voxland_2010 (Liquibase Formatted SQL, May 12, 2010).
Claims 9, 18. Tillias_2012 clearly teaches for Liquibase to be integrated with HSQLDB, which may properly be found to make obvious to those of ordinary skill in the art “wherein deploying the set of proposed changes to the target environment comprises issuing SQL commands to the target environment,” because HSQLDB includes an SQL command line and HSQLDB supports ANSI-92 SQL, which is an industry standard. Nevertheless, Tillias_2012 does not EXPLICITLY state that Liquibase XML changelog commands generate SQL.
Voxland_2010, however, EXPLICITLY teaches that Liquibase generates SQL for the target database from the changes listed in the changelog. See page 1: “… Note that this is specifically raw SQL, not abstracted Liquibase changes liked “createTable” that generate different SQL depending on the target database…” While Voxland_2010 makes this teaching in the context of Formatted SQL, which is a separate feature of Liquibase, the context of this teaching does not diminish the fact that, when Liquibase Formatted SQL is not being used in the changelog, Liquibase will generate SQL for the target database according to the abstracted commands found in the XML changelog file.
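By way of illustration only, an abstracted XML changelog of the kind at issue, from which Liquibase generates database-specific SQL for the target database, might look like the following sketch (standard Liquibase changelog elements; the table, column, and author names are hypothetical):

```xml
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog">
  <changeSet id="1" author="example">
    <!-- Abstracted change; Liquibase emits SQL appropriate to the
         target database (e.g., HSQLDB) at deployment time. -->
    <createTable tableName="USERS">
      <column name="ID" type="int">
        <constraints primaryKey="true"/>
      </column>
      <column name="NAME" type="varchar(255)"/>
    </createTable>
  </changeSet>
</databaseChangeLog>
```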
Tillias_2012 and Voxland_2010 are analogous art because they are from the same field of endeavor called databases. Before the effective filing date it would have been obvious to a person of ordinary skill in the art to combine Tillias_2012 and Voxland_2010. The rationale for doing so would have been that Tillias_2012 teaches to use Liquibase changelogs in XML format and Voxland_2010 teaches that commands in the Liquibase changelogs generate database specific SQL; the Liquibase integration with HSQLDB taught by Tillias_2012 thus results in SQL being issued for the target HSQLDB environment. Therefore, it would have been obvious to combine Tillias_2012 and Voxland_2010 for the benefit of having the basic features of Liquibase, which Tillias_2012 teaches to use, and also of having the SQL that HSQLDB requires, to obtain the invention as specified in the claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIAN S COOK whose telephone number is (571)272-4276. The examiner can normally be reached 8:00 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emerson Puente can be reached at 571-272-3652. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BRIAN S COOK/Primary Examiner, Art Unit 2187