DETAILED ACTION
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1-3, 5, 7-9, 12-14, 19, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by AVIDAN et al. (US 2019/0205667 A1):
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over AVIDAN et al. (US 2019/0205667 A1) in view of DeLuca (US 2017/0116660 A1):
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over AVIDAN et al. (US 2019/0205667 A1) in view of Hershey et al. (US 2015/0134244 A1):
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over AVIDAN et al. (US 2019/0205667 A1) in view of LEIBOVICI et al. (US 2020/0026875 A1), Patsiokas et al. (US 2015/0271247 A1), and Rutschman et al. (US 2018/0239982 A1):
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over AVIDAN et al. (US 2019/0205667 A1) in view of LEIBOVICI et al. (US 2020/0026875 A1), Patsiokas et al. (US 2015/0271247 A1), and Rutschman et al. (US 2018/0239982 A1) as applied to claim 10, further in view of Khoyi et al. (US 2015/0372807 A1):
Claims 15 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over AVIDAN et al. (US 2019/0205667 A1) in view of Levinson et al. (US 2019/0295315 A1):
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over AVIDAN et al. (US 2019/0205667 A1) in view of Levinson et al. (US 2019/0295315 A1) as applied to claim 15, further in view of Patton et al. (US 10,209,974 B1):
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over AVIDAN et al. (US 2019/0205667 A1) in view of Levinson et al. (US 2019/0295315 A1) as applied to claim 15, further in view of Patton et al. (US 10,209,974 B1) as applied to claim 16, further in view of LIU et al. (CN 101719216 A) with SEARCH machine translation:
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 0: the broadest reasonable interpretation is established as shown in the footnotes.
Step 1: Claim 1 is directed to a machine; claims 15 and 19 are directed to processes.
Step 2A, prong 1:
The claims recite an abstract idea (a mental process and mathematical concepts); claim 1 is representative of claims 15 and 19:
“obtain at least some of the imagery data and/or representations… analyze the imagery data and/or the representations thereby identifying multiple different types…train…a model”:
1. A system operative to make different types of predictions regarding objects of various categories, comprising:
a plurality of on-road vehicles moving throughout various areas, in which each of the on-road vehicles comprises an onboard imagery sensor operative to capture imagery data of areas surrounding geo-locations visited by the on-road vehicle, in which different objects of various categories appear in the imagery captured;
a server configured to:
obtain at least some of the imagery data and/or representations of the different objects of various categories appearing in the imagery data;
analyze the imagery data and/or the representations thereby identifying multiple different types of interactions among objects of the various categories; and
train, using at least results of said analysis, a model operative to draw multiple types of inferences associated with the multiple different types of interactions, regarding objects of the various categories.
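For illustration only, the recited obtain → analyze → train flow can be sketched as follows (a minimal sketch; all names and data are hypothetical, and the stubs merely stand in for the claimed functions):

```python
# Hypothetical sketch of the recited server pipeline (illustrative only).

def obtain(vehicles):
    """Collect imagery data / object representations from each vehicle."""
    return [frame for v in vehicles for frame in v["imagery"]]

def analyze(frames):
    """Identify types of interactions among objects across frames (stub)."""
    interactions = set()
    for frame in frames:
        objs = frame["objects"]
        if len(objs) >= 2:
            # Record an interaction type as an ordered pair of categories.
            interactions.add(tuple(sorted(o["category"] for o in objs)[:2]))
    return interactions

def train(interactions):
    """Stand-in for model training: one inference type per interaction type."""
    return {i: f"inference-for-{'-'.join(i)}" for i in interactions}

vehicles = [
    {"imagery": [{"objects": [{"category": "pedestrian"}, {"category": "vehicle"}]}]},
    {"imagery": [{"objects": [{"category": "vehicle"}, {"category": "vehicle"}]}]},
]
model = train(analyze(obtain(vehicles)))
```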
Step 2A, prong 2:
This judicial exception is not integrated into a practical application because claim 1 does not improve the functioning of a computer, in view of applicant’s disclosure:
[0133] In one embodiment, said classification is an improved classification as a result of said increasing of the amount of descriptive information associated with at least some of the objects (e.g., 1-ped-2-des-b1 by itself may be used to determine that 1-ped-2 is a pedestrian, but only when combining 1-ped-2-des-b1 with 1-ped-2-des-c9 it can be determined that 1-ped-2 is a male having certain intentions).
[0149] In one embodiment, said event-descriptions 1-event-2-des-b1, 1-event-2-des-c9, 1-event-1-des-d6 are generated using a technique associated with at least one of: (i) motion detection, (ii) object tracking, (iii) object analysis, (iv) gesture analysis, (v) behavioral analysis, and (vi) machine learning prediction and classification models.
Step 2B:
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, such as the claimed:
a plurality of on-road vehicles moving throughout various areas, in which each of the on-road vehicles comprises an onboard imagery sensor operative to capture imagery data of areas surrounding geo-locations visited by the on-road vehicle, in which different objects of various categories appear in the imagery captured;
a server configured to:
adhere to what is conventional, and a “server” needs no explanation to one of ordinary skill in the art (cf. 35 U.S.C. 112(a)), in view of applicant’s disclosure:
BACKGROUND
[0002] A plurality of on-road vehicles moving along streets of a city, while utilizing onboard cameras for autonomous driving functions, may unintentionally capture images of city objects such as structures and individuals.
[0080] FIG. 1F illustrates one embodiment of a server 99-server receiving from the on-road vehicles 10a, 10b, 10c, 10d, 10e, 10f (FIG. 1D) specific visual records associated with a particular geo-location of interest. Server 99-server may first receive from each of the vehicles a list of visited locations. For example, 10a is reporting being at locations 10-loc-1, 10-loc-2, and 10-loc-3, which is recorded by the server in 1-rec-a. 10b is reporting being at location 10-loc-2, which is recorded by the server in 1-rec-b. 10c is reporting being at location 10-loc-2, which is recorded by the server in 1-rec-c. 10d is reporting being at location 10-loc-3, which is recorded by the server in 1-rec-d. 10e is reporting being at location 10-loc-5, which is recorded by the server in 1-rec-e. 10f is reporting being at location 10-loc-4, which is recorded by the server in 1-rec-f. The server 99-server can then know which of the vehicles possess visual records associated with a specific location. For example, if the server 99-server is interested in imagery data associated with location 10-loc-2, then according to the records in the server, only vehicles 10a, 10b, and 10c have relevant imagery data, and therefore the server may instruct 10a, 10b, and 10c to send the related visual records, following which 10a responds by sending 4-visual-a2, 10b responds by sending 4-visual-b1, and 10c responds by sending 4-visual-c9. The server 99-server, or any other server, may be located in a stationary data center or in conjunction with another stationary location such as an office or a building, or it may be located on-board one of the on-road vehicles, or it may be co-located (distributed) on-board several of the on-road vehicles, unless specifically mentioned otherwise. The server 99-server, or any other server, may be implemented as a single machine or it may be distributed over several machines.
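The record-keeping scheme quoted above (vehicles report visited locations; the server then instructs only the vehicles whose records cover a location of interest) can be sketched as follows; the data mirrors the example in [0080], and the function name is illustrative only:

```python
# Illustrative sketch of 99-server's location records per [0080].
records = {
    "10a": ["10-loc-1", "10-loc-2", "10-loc-3"],  # 1-rec-a
    "10b": ["10-loc-2"],                          # 1-rec-b
    "10c": ["10-loc-2"],                          # 1-rec-c
    "10d": ["10-loc-3"],                          # 1-rec-d
    "10e": ["10-loc-5"],                          # 1-rec-e
    "10f": ["10-loc-4"],                          # 1-rec-f
}

def vehicles_with_records(location):
    """Return the vehicles the server would instruct to send visual records."""
    return sorted(v for v, locs in records.items() if location in locs)

print(vehicles_with_records("10-loc-2"))  # ['10a', '10b', '10c']
```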
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 5, 7-9, 12-14, 19, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by AVIDAN et al. (US 2019/0205667 A1):
Re 1., AVIDAN teaches A system operative to make different types of predictions regarding objects of various categories, comprising:
a plurality of on-road vehicles (“particularly with respect to autonomous or semi-autonomous vehicles” [0029] 2nd S) moving throughout various areas, in which each of the on-road vehicles comprises an onboard imagery sensor operative to capture imagery data of areas surrounding geo-locations (“environmental conditions, object types, etc.” [0035] penult S) visited by the on-road vehicle (“for generating or collecting environmental image data (e.g., for processing by the machine learning system 103 and/or computer vision system 105), related geographic data, etc.” [0080]), in which different objects of various (“semantic” [0030] 2nd S) categories appear in the imagery captured;
a server (i.e., “The synthetic data platform 107, machine learning system 103, and/or computer vision system 105” [0076] 2nd S: fig. 1: internet) configured to:
obtain at least some of the imagery (computer vision) data and/or representations of the different objects of various categories appearing in the imagery data;
analyze (i.e., detect via “analysis” [0080] last S) the imagery data and/or the representations (via “the computer vision system 105 can detect collisions, dangerous situations (e.g., dangerous overtaking, following too closely, dangerous weaving, etc.) in input image sequences” [0074] 3rd S) thereby identifying multiple different types of interactions among objects of the various categories; and
train, using at least results of said analysis, a model (“for training and evaluating machine learning models (e.g., a machine learning system 103 in combination with a computer vision system 105) to detect the actions in image sequences or videos (e.g., as captured in real-time from camera-equipped vehicles 101).” [0029] last S) operative to draw (understood given machine learning) multiple types of inferences (understood given machine learning) associated with the multiple different types (understood given labels as shown in fig. 8:705: (1) an overtaking class is a different group/cluster than (2) a collision class cluster/group) of interactions (“as custom parameters” [0046] penult S), regarding objects of the various categories.
Re 2., AVIDAN discloses The system of claim 1, wherein
said multiple different types of interaction comprise interactions of individuals with other individuals (via “the presence of other vehicles, pedestrians, traffic lights, potholes and any other objects, or a combination thereof.” [0081] 2nd S),
in which said analysis includes at least one of:
tracking movement paths of the individuals,
identifying events involving the individuals (“without having to render the actual collisions, near misses, or potential collisions” [0037] 3rd S),
analyzing co-occurrence of the individuals, and
identifying individuals associated with the same organization,
in which said multiple types of inferences comprises inferences regarding social build (networking) aspects (as described via “social networking applications” [0078] penult S) of the individuals.
Re 3., AVIDAN discloses The system of claim 1,
wherein said multiple different types of interaction comprise interactions of (“triangulation” [0081] penult S) organizations with individuals,
in which said analysis includes at least one of:
(A) identifying individuals who frequent locations associated with the organization and
(B) identifying individuals (“without having to render the actual collisions, near misses, or potential collisions” [0037] 3rd S) associated with organizational elements (as shown in the system of fig. 1),
in which said multiple types of inferences comprises inferences regarding affiliation aspects (or “feature”- “associated” “characteristics” [0060] 5th S) of the individuals and/or the organization.
Re 5., AVIDAN discloses The system of claim 1,
wherein said multiple different types of interaction comprise interactions of vehicles with individuals,
in which said analysis includes at least one of:
(A) identifying instances (via “identifiable situations” [0067] 1st S) of individuals entering (in/into) and/or exiting vehicles (via “more passenger vehicles, taxis, smaller delivery trucks” [0050] last S),
(B) analyzing patterns to distinguish passengers from drivers, and
tracking individual movements near vehicles,
in which said multiple types of inferences comprises inferences regarding
(C) transportation behaviors (“for autonomous driving and other applications beyond the automotive scenario” [0073] 1st S) and/or
(D) needs of
the individuals and/or
the vehicles.
Re 7., AVIDAN discloses The system of claim 1,
wherein said representations of the different objects comprise at least one of:
(a) image-based representations,
(b) feature-based representations,
(c) classification-based representations,
(d) description-based representations, and
(e) geometrical representations (comprised by a “geometry”-“dataset” [0038] penult S).
Re 8., AVIDAN discloses The system of claim 1, wherein said analysis to establish multiple different types of interactions comprises generating at least one of:
(i) trajectories (“of the objects” [0052] 4th S),
(ii) interaction graphs,
(iii) event sequences,
(iv) behavior sequences, and
(v) interaction profiles.
Re 9., AVIDAN discloses The system of claim 1,
further comprising a plurality of (“CNN detector” [0072] 6th S) computers located respectively onboard the plurality of on-road vehicles,
wherein:
the server is further configured to transmit (via a “a radio band electromagnetic transmitter” [0109]) at least some of the trained model to at least some of the computers; and
said at least some of the computers are configured to utilize the at least some of the trained model transmitted to enhance (or “improve” [0069] last S) an ability of the on-road vehicles to
extract (via “detections” [0069] last S) the representations of the different objects from the imagery data and/or
detect the different objects in the imagery data.
Re 12., AVIDAN discloses The system of claim 1,
wherein said objects of various categories comprise at least one of:
(a) individuals,
(b) organizations,
(c) structures,
(d) (“autonomous or semi-autonomous” [0029] 3rd S) vehicles,
(e) devices worn and/or carried by individuals,
(f) trees and/or vegetation,
(g) road and/or hazards, and
(h) infrastructure elements.
Re 13., AVIDAN discloses The system of claim 12,
wherein said analysis to establish the multiple different types of interaction further comprises analyzing
(A) multiple (&)
(B) different (via “train the machine learning detector of the vehicle 101 for the specifics” [0071] last S)
aspects (mapped to Markush alternative (B)) of the objects, said (quality) aspects comprising at least one of:
(a) motion dynamics (“and potentially other behaviors” [0073]) of the objects,
(b) motion paths of the objects, and
which other objects and/or events are associated (fig. 4: “OBJECTS INVOLVED”: “VEHICLES”: “PEDESTRAINS”: “OTHER OBJECTS”) with the objects.
Re 14., AVIDAN discloses The system of claim 13, wherein said multiple types of inferences comprise at least one of:
(a) intentions (comprised by “applications” [0073] last two Ss) of individuals, and
(b) feelings and/or emotions of individuals.
Claim 19 is rejected like claims 1 and 15:
Re 19., AVIDAN discloses A method for making different types of predictions regarding objects of various categories, comprising:
obtaining, in conjunction with a plurality of on-road vehicles traversing an environment, imagery data and/or representations thereof, the imagery data captured by onboard sensors of said vehicles, said imagery data and/or representations thereof encompassing both on-road elements pertinent to vehicle navigation and off-road elements within the sensors' capture range, wherein said off-road elements (fig. 5) include various objects and scenes situated at and beyond the immediate vicinity of roadways;
accumulating said imagery data and/or representations thereof to enable capturing of a multitude of different types of interactions related to at least the off-road objects, said interactions occurring within at least an off-road context (fig. 5: the ground surrounding the road); and
training a model in conjunction with the multitude of different types of interactions arising at least from the different off-road objects of various categories and the numerous ways each object category interacts with other object categories as manifested in and using said imagery data and/or representations thereof, thereby making the model operative to draw multiple different types of inferences within at least the off-road context.
Re 20., AVIDAN discloses The method of claim 19,
wherein the multitude of different types of interactions comprises interactions spanning (via fig. 5: a ground surrounding road) diverse aspects (or “geographic features (e.g., roads, road objects, points of interest, etc.) that can be used to render a 3D rendering of the location in the synthetic image data.” [0045] last S) of the environment, including
(A) social,
(B) human behavioral,
(C) physical (via “geographic features”), and
(D) structural
aspects (mapped to Markush alternative (C)),
said interactions used to train the model to draw inferences regarding a breadth (via “geographic features (e.g., two-dimensional or three-dimensional features)” [0085] 3rd S) of off-road phenomena across corresponding
(E) social,
(F) human behavioral,
(G) physical (or “geographic locations” [0035] 3rd S), and
(H) structural
domains (or geographic spaces mapped to Markush alternative (G)).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over AVIDAN et al. (US 2019/0205667 A1) in view of DeLuca (US 2017/0116660 A1):
Re 4., AVIDAN teaches The system of claim 1,
wherein said multiple different types of interaction comprise interactions of (“nearby” [0052] bullet 5) structures with (walking) individuals (“In addition to vehicles” [0052]),
in which said analysis includes at least one of:
(A) tracking (via “follow realistic (but still random) trajectories/scenarios” [0059]) individual movement (“through a geographic space” [0003] 2nd S) near (via said “nearby” [0052] bullet 5) structures,
(B) identifying instances (via “identifiable situations” [0067] 1st S) of individuals entering and/or exiting the structure, and
(C) analyzing (or detecting) the presence (“of other vehicles, pedestrians, traffic lights, potholes and any other objects, or a combination thereof.” [0081] 2nd S) of individuals within the structure,
in which said multiple types of inferences comprises inferences regarding
(D) dwelling and/or
(E) visitation
patterns (via “Objects derived from the object parameters would populate the scene and interact as defined by the action or situation (for generating labels) or in other patterns” [0056] 3rd S) of
the individuals and/or
the structures.
AVIDAN does not teach the difference of claim 4 of:
dwelling and/or visitation (patterns).
DeLuca teaches the difference of claim 4 of:
dwelling and/or visitation (pattern) (“within the respective retail environment(s)” [0068] last S).
Since AVIDAN teaches a “store”:
[0097] The road/link segments and nodes can be associated with attributes, such as geographic coordinates, street names, address ranges, speed limits, turn restrictions at intersections, and other navigation related attributes, as well as POIs, such as gasoline stations, hotels, restaurants, museums, stadiums, offices, automobile dealerships, auto repair shops, buildings, stores, parks, etc. The geographic database 109 can include data about the POIs and their respective locations in the POI data records 1007. The geographic database 109 can also include data about places, such as cities, towns, or other communities, and other geographic features, such as bodies of water, mountain ranges, etc. Such place or feature data can be part of the POI data records 1007 or can be associated with POIs or POI data records 1007 (such as a data point used for displaying or representing a position of a city).
one of ordinary skill in the art of stores could modify AVIDAN to be as DeLuca’s, seeing in the change an improved store, via DeLuca [0030] last S:
The present subject matter improves gift giving by providing for automated in-store shopper location-based gift idea determination, as described above and in more detail below. As such, improved gift giving experiences may be obtained through use of the present technology.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over AVIDAN et al. (US 2019/0205667 A1) in view of Hershey et al. (US 2015/0134244 A1):
Re 6., AVIDAN teaches The system of claim 1,
wherein said multiple different types of interaction comprise interactions of individuals (or “users” [0069] 2nd S, [0073] 1st S) with wearable (via “the user device 115 can support any type of interface to the user (such as ‘wearable’ circuitry, etc.)” [0079] penult S) devices (or “the user devices 115” [0076]),
in which said analysis includes at least one of:
(A) identifying instances (via “identifiable situations” [0067] 1st S) where individuals are
wearing (via said “‘wearable’ circuitry” [0079] penult S) and/or
carrying
specific wearable (via said “‘wearable’ circuitry” [0079] penult S) devices (or “the user devices 115” [0076]) and
(B) analyzing individual interactions (“as custom parameters” [0046] penult S) with wearable (via said “‘wearable’ circuitry” [0079] penult S) devices,
in which said multiple types of inferences comprises inferences regarding
(C) (“data as a” [0070] 1st S) product and/or
(D) brand
preferences of the individuals.
AVIDAN does not teach the Markush element [(C) and/or (D)] of claim 6:
(C) (product) and/or
(D) brand
preferences.
Hershey teaches the Markush element [(C) and/or (D)] of claim 6:
(C) (product) and/or
(D) brand (or “brand name”-“preferences” [0049])
preferences (mapped to Markush alternative (D)).
Since AVIDAN teaches navigation (“navigation route” [0045]), one of ordinary skill in the art of navigation could modify AVIDAN to be as Hershey’s, seeing in the change “As an advantage, data in navigations system can be continuously updated, augmented with additional en-route information, and easily transferred between systems.”, Hershey [0002] last S.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over AVIDAN et al. (US 2019/0205667 A1) in view of LEIBOVICI et al. (US 2020/0026875 A1), Patsiokas et al. (US 2015/0271247 A1), and Rutschman et al. (US 2018/0239982 A1):
Re 10., AVIDAN teaches The system of claim 1,
wherein said server is a distributed server comprising a plurality of computers located on board at least some of said plurality of on-road vehicles, thereby forming a hyperconvergence computer architecture; and
said plurality of on-road vehicles having a respective computer onboard comprises at least 100,000 (one hundred thousand) vehicles, and
wherein each of said computers has a processing power of at least 10 (ten) Teraflops, resulting in a total aggregated processing power of at least one Exaflop.
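For arithmetic verification of the recited aggregate (illustrative only; variable names are hypothetical): 100,000 vehicles at 10 teraflops each gives 10^5 × 10^13 = 10^18 FLOPS, i.e., one exaflop, consistent with the claim.

```python
# Arithmetic check of the claimed aggregate: 100,000 vehicles x 10 TFLOPS each.
vehicles = 100_000
teraflops_per_vehicle = 10
flops_per_teraflop = 10**12      # 1 teraflop = 1e12 floating-point ops/sec
total_flops = vehicles * teraflops_per_vehicle * flops_per_teraflop
exaflops = total_flops / 10**18  # 1 exaflop = 1e18 FLOPS
print(exaflops)  # 1.0
```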
AVIDAN does not teach the difference of claim 10 of:
a) distributed (server)…hyperconvergence (computer) architecture…
b) 100,000 (one hundred thousand) vehicles…
c) a processing power of at least 10 (ten) Teraflops, resulting in a total aggregated processing power of at least one Exaflop.
LEIBOVICI teaches the difference a) of claim 10 of:
a) distributed (“computing environment” [0058]: fig. 4) (server) … (“As an option, one or more variations of” [0057] 2nd S) hyperconvergence (computer) architecture (“and functionality of the embodiments described herein”)…
b) 100,000 (one hundred thousand) vehicles…
c) a processing power of at least 10 (ten) Teraflops, resulting in a total aggregated processing power of at least one Exaflop.
Since AVIDAN teaches a server computer, one of ordinary skill in the art of computers could modify AVIDAN to be as LEIBOVICI’s, seeing in the change “facilitate efficient distribution of certain software components such as applications or services (e.g., micro-services)”, LEIBOVICI [0060] penult S.
AVIDAN of the combination of AVIDAN, LEIBOVICI does not teach the remaining difference of claim 10 of:
b) 100,000 (one hundred thousand) vehicles…
c) a processing power of at least 10 (ten) Teraflops, resulting in a total aggregated processing power of at least one Exaflop.
Patsiokas teaches difference b) of claim 10:
b) (“Once the proper bandwidth allocation for the update files is established, there is no difference in sending the updates to 1,000 versus” [0122]) 100,000 (one hundred thousand) vehicles…
c) a processing power of at least 10 (ten) Teraflops, resulting in a total aggregated processing power of at least one Exaflop.
Since AVIDAN teaches a vehicle, one of ordinary skill in the art of vehicles could modify AVIDAN of the combination of AVIDAN, LEIBOVICI to be as Patsiokas’, seeing in the change an updated and more accurate vehicle.
AVIDAN of the combination of AVIDAN, LEIBOVICI, Patsiokas does not teach the remaining difference c) of claim 10 of:
c) a processing power of at least 10 (ten) Teraflops, resulting in a total aggregated processing power of at least one Exaflop.
Rutschman teaches the remaining difference c) of claim 10 of:
c) a processing power of at least 10 (ten) Teraflops (“such as on the order of twenty teraflops” [0223] last S), resulting in a total aggregated processing power of at least one Exaflop.
Since AVIDAN of the combination of AVIDAN, LEIBOVICI, Patsiokas teaches a computer, one of ordinary skill in the art of computers could modify AVIDAN of the combination to be as Rutschman’s, seeing in the change “Significant processing power”, Rutschman [0223].
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over AVIDAN et al. (US 2019/0205667 A1) in view of LEIBOVICI et al. (US 2020/0026875 A1), Patsiokas et al. (US 2015/0271247 A1), and Rutschman et al. (US 2018/0239982 A1) as applied to claim 10, further in view of Khoyi et al. (US 2015/0372807 A1):
Claim 11 is rejected like claim 17:
Re 11., AVIDAN of the combination of AVIDAN, LEIBOVICI, Patsiokas, Rutschman teaches The system of claim 10,
wherein said high processing power is needed (“to generate the synthetic image data” [0062] penult S) to train the model in conjunction with a vast number (or “an almost unlimited amount of samples” [0067] last S) of interaction possibilities (via “possible identifiable situations” [0067] 1st S: i.e., possible “interaction”-“situations” [0046] penult S) arising from the multitude of different objects and object categories present in the environment, and
further in conjunction with imagery data ingestion (resulting in fig. 8: “AUTOMATIC LABELS”) exceeding 20 (twenty) Petabytes, in which said power, multitude of interactions, and data (category) ingestion together facilitating multi-domain AI (or “road, route”-“machine learning model” [0038] penult S) operative to draw inferences across multiple diverse types of domains, said domains comprising at least three of:
(a) transportation and/or mobility domains (see claim 17 regarding this Markush alternative “(a)”),
(b) social domains,
(c) infrastructure domains,
(d) organizational domains, and
(e) commercial and/or consumer domains.
AVIDAN of the combination of AVIDAN, LEIBOVICI, Patsiokas, Rutschman does not teach the difference of claim 11 of:
(imagery data ingestion) exceeding 20 (twenty) Petabytes, (in which said power, multitude of interactions, and data ingestion together facilitating multi-domain AI).
Khoyi teaches the difference of claim 11 of:
(imagery data ingestion) exceeding 20 (twenty) Petabytes (via “The world's total effective capacity to communicate information through information networks was 281 petabytes in 1986, 471 petabytes in 1993, 2.2 exabytes in 2000, and 65 exabytes in 2007. It is predicted that the amount of data traffic communicated over the Internet on an annual basis will exceed 667 exabytes after 2014.” [0006]), (in which said power, multitude of interactions, and data ingestion together facilitating multi-domain AI).
Since AVIDAN of the combination of AVIDAN, LEIBOVICI, Patsiokas, Rutschman teaches storage, one of ordinary skill in the art of storage could modify AVIDAN of the combination to be as Khoyi’s, seeing in the change “Systems and methods of data storage and various features and advantageous details thereof”, Khoyi [0028] 1st S.
Claims 15 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over AVIDAN et al. (US 2019/0205667 A1) in view of Levinson et al. (US 2019/0295315 A1):
Claim 15 is rejected like claim 1:
Re 15., AVIDAN discloses A method for making different types of predictions regarding objects of various categories, comprising:
obtaining, in conjunction with a plurality of on-road vehicles traversing an environment, imagery data and/or representations of different objects of various categories within the environment (“information” or “data”) captured (via “a camera/imaging sensor for gathering image data (e.g., the camera sensors may automatically capture road sign information, images of road obstructions, etc. for analysis)” of fig. 1: double-ended input/output data/info arrows) over a (training) period (via “of time” [0003] 2nd S) of at least one month by onboard sensors of said vehicles;
accumulating (via “a geographic database” [0085] and “a training database 111”: fig. 1:109,111) said imagery data and/or representations, said period being sufficient (via “This labeled synthetic data can then be used for training or evaluating machine learning models (e.g., CNNs or equivalent) to predict or detect actions or dynamic movements of objects in input image sequences or videos.” [0034] 3rd S) to enable the capture of a multitude (or “semantic categories” [0030] 2nd S) of different types (via fig. 8: “AUTOMATIC LABELS”) of interactions related to the objects; and
training said model in conjunction with the multitude of different types of interactions arising from the different objects of said various categories and the numerous ways each object category interacts with other object categories (i.e., “Bicycle will collide with a pedestrian in t seconds”, “dangerous bypassing of one car of another car”, “car zig-zagging in its lane” [0043]) as manifested in and using said imagery data and/or representations, thereby making the model operative to draw (i.e., infer given machine learning) multiple different types of inferences regarding the objects, the multiple different types of inferences associated with the multitude (i.e., a plurality) of different types of interactions used to train the model.
AVIDAN does not teach the difference of claim 15 of:
(the environment captured over a period of) at least one month (by onboard sensors of said vehicles).
Levinson teaches a problem (“inaccurate maps” [0012] 3rd S) similar to applicant’s and the difference of claim 15 of:
(the environment captured over a period of) at least one month (or “over the course of a day, month, or year” [0041] 2nd to last S) (by onboard sensors of said vehicles).
Since AVIDAN teaches data/information capture, one of skill in the art of capturing can make AVIDAN's be as Levinson's, seeing in the change the generation of an accurate image via Levinson, fig. 3:318: “APPLY BLENDING AND/OR DUPLICATING TO GENERATE AN UPDATED IMAGE”.
Re 17. The method of claim 15,
wherein said drawing of inferences comprises drawing inferences across multiple diverse types of domains, said domains comprising at least three of:
(a) transportation and/or
mobility
domains (or “vehicle”-“movement paths” [0050] bullet “(2)”: two transportation path domains of at least three & “vehicle”-“highway” [0050] last S: 3rd transportation highway domain of at least 3) including:
(i) commuting patterns,
(ii) dwelling and working place association, and
(iii) transportation service usage,
(b) social domains including
understanding and/or predicting
social interactions and/or
group formations,
(c) infrastructure domains including understanding
condition and/or
state
of infrastructure elements and predicting potential failures,
(d) organizational domains including understanding and predicting interactions within
organizations,
employee behavior, and/or
organizational changes, and
(e) commercial and/or
consumer
domains
including analyzing and predicting various aspects of
commercial and/or
consumer
behavior,
comprising shopping patterns44.
Claim(s) 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over AVIDAN et al. (US 2019/0205667 A1) in view of Levinson et al. (US 2019/0295315 A1) as applied in claim 15 further in view of Patton et al. (US 10,209,974 B1):
[Image: media_image8.png, 716 × 798, greyscale]
Re 16., AVIDAN of the combination of AVIDAN,Levinson teaches The method of claim 15,
wherein said training of said model further comprises training on the multitude of different types of interactions arising from the numerous ways individuals interact with other different objects of various categories and/or locations, said ways comprising at least three of:
(a) social interactions, including
proximity and/or
group
formations and/or
co-occurrences across different imagery instances,
to identify
social connections and/or
relationships,
(b) interactions with locations of interest, comprising:
(i) individuals who frequent locations associated with specific organizations, indicating
potential employment and/or
membership,
(ii) residence of individuals based on their
movement patterns and/or
frequent nighttime locations, and
(iii) recurring travel patterns between specific locations to infer commuting
routes and/or
habits,
(c) interactions associated with individuals
(c1) wearing (via “ ‘wearable’ ”-“user device 115” [0079] penult S or “devices 115” [0076] last S) and/or
(c2) carrying (via “ ‘wearable45’ ”-“user device 115” [0079] penult S or “devices 115” [0076] last S)
specific wearable
(c3) items (via “ ‘wearable46’ ”-“user device 115” [0079] penult S or “devices 115” [0076] last S)
and/or
devices (mapped to Markush alternatives (c1) & (c2) & (c3): 3 of 3),
(d) shopping activities, comprising
entering stores,
carrying shopping bags, and/or
interacting with products, and
(e) interactions associated with
presence and/or
actions
of individuals within the context of specific general off-road events; and
the method further comprises deploying said trained model into a production (via simulated) environment (“based on the user defined parameters indicating the action, objects, etc. to be simulated.” [0055] last S), wherein said model is operative to make inferences regarding individuals based on the previous interactions learned (via “interactions”-“training sets” [0047] 2nd S).
AVIDAN of the combination of AVIDAN,Levinson does not teach the difference of claim 16 of:
deploying (said trained model).
Patton teaches the difference of claim 16 of:
deploying (“to automatically generate (e.g., train), test, and deploy new models into the production environment.”, c. 2,ll. 40-45) (said trained model).
Since AVIDAN of the combination of AVIDAN,Levinson teaches a model, one of skill in the art of models can make AVIDAN's of the combination of AVIDAN,Levinson be as Patton's, seeing in the change “candidate event detection models that satisfy deployment conditions into the production environment for use with real-world data”, Patton, c. 2,ll. 45-50.
Claim(s) 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over AVIDAN et al. (US 2019/0205667 A1) in view of Levinson et al. (US 2019/0295315 A1) as applied in claim 15 further in view of Patton et al. (US 10,209,974 B1) as applied in claim 16 further in view of LIU et al. (CN 101719216 A) with SEARCH machine translation:
[Image: media_image9.png, 716 × 798, greyscale]
Claim 18 is rejected like claim 16:
Re 18., AVIDAN of the combination of AVIDAN,Levinson,Patton teaches The method of claim 15,
wherein said drawing of inferences further comprises drawing inferences regarding
(A) complex concepts and/or
(B) abstractions (“of a rendering of the one or more objects, the geographic space, or other objects in the computer-generated image sequence” [0062] 1st S), including
those related to human behavior and/or
emotions and/or
intentions,
said inferences comprising at least one of:
(a) inferring emotional states associated with visual cues, including
facial expressions and/or
body language,
(b) inferring
intentions and/or
goals (or at least one target/goal via “a target generalizability of the machine learning model” [0062] 1st S or another goal via “training or evaluating a machine learning model to detect the at least one action” [0003] last S )
of individuals associated with sequences of
actions and/or
interactions,
(c) inferring relationships between individuals based on their
interactions and/or
co-occurrence
patterns, and
(d) inferring brand
affinities and/or
preferences,
by associating individuals with
specific brands and/or
products
indicating preferences and potential purchasing behavior47; and
the method further comprises deploying said trained model into a production environment, wherein said model is operative to infer
human behavior and/or
emotions and/or
intentions.
AVIDAN of the combination of AVIDAN,Levinson,Patton does not teach the difference of claim 18 of:
(said model is operative to) infer
human
behavior and/or
emotions and/or
intentions.
LIU teaches the difference of claim 18 of:
(said model is operative to) infer
human
behavior (“through analysis of the motion area”, pg. 5 [0020], 1st txt blk) and/or
emotions and/or
intentions.
Since AVIDAN of the combination of AVIDAN,Levinson,Patton teaches behavior, one of skill in the art of behaviors can make AVIDAN of the combination of AVIDAN,Levinson,Patton be as LIU’s seeing in the change, via Liu, pg. 7, 1st txt blk:
“the calculation process is shortened, which improves the efficiency and robustness of the detection algorithm. The invention also uses the HSV model the suspected shadow pixel value adding parameter learning of mixed Gaussian shadow model so as to accurately judge whether the suspected shadow is a real shadow, and reducing error detecting and improves the identification accuracy. improved and fusion of several methods, the invention effectively solves the easily modelled, simple in algorithm, detecting accurately the technical problems and realize a template matching-based higher detection rate of moving human abnormal behaviour identification method.”
Conclusion
The prior art “nearest to the subject matter defined in the claims” (MPEP 707.05) made of record and not relied upon is considered pertinent to applicant's disclosure.
The following table lists several references that are relevant to the subject matter claimed and disclosed in this Application. The references are not relied on by the Examiner, but are provided to assist the Applicant in responding to this Office action.
Citation: Sutardja (US 2007/0088490 A1)
Relevance: Sutardja teaches [0161]: “In this example, the average traffic level during the period may be equal to 100,000 vehicles.” as the closest to the claimed “100,000 (one hundred thousand) vehicles” of claim 10.

Citation: Jacobsen (US 2008/0247663 A1)
Relevance: Jacobsen teaches [0003]: “According to a recent study, more than 100 billion photographs are taken each year. To store them all digitally would require 500 petabytes of storage.” as the closest to the claimed “20 (twenty) Petabytes” of claim 11.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DENNIS ROSARIO whose telephone number is (571)272-7397. The examiner can normally be reached Monday-Friday, 9AM-5PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw can be reached at 571-272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DENNIS ROSARIO/Examiner, Art Unit 2676
/Henok Shiferaw/Supervisory Patent Examiner, Art Unit 2676
1 analyze: to separate (a material or abstract entity) into constituent parts or elements; determine the elements or essential features of (opposed to synthesize). (Dictionary.com)
2 detect: to find out the true character or activity of, wherein find is defined: to ascertain by study or calculation, wherein ascertain is defined: to find out definitely; learn with certainty or assurance; determine, wherein true is defined: being or reflecting the essential or genuine character of something, wherein character is defined: one such feature or trait; characteristic. (Dictionary.com)
3 analyze and detect are identities: they mean the same thing
4 draw: to deduce; infer (Dictionary.com)
5 machine learning: Computers, Digital Technology. the capacity of a computer to process and evaluate data beyond programmed algorithms, through contextualized inference (often used attributively). (Dictionary.com)
6 pedestrian: a person who goes or travels on foot; walker, wherein person is defined: a human being, whether an adult or child, wherein human being is defined: any individual of the genus Homo, especially a member of the species Homo sapiens. (Dictionary.com)
7 and: (used to connect grammatically coordinate words, phrases, or clauses) along or together with; as well as; in addition to; besides; also; moreover. (Dictionary.com)
8 object: a thing, person, or matter to which thought or action is directed, wherein person is defined: a human being, whether an adult or child, wherein human being is defined: any individual of the genus Homo, especially a member of the species Homo sapiens. (Dictionary.com)
9 aspect:
2 nature; quality; character;
3 a way in which a thing may be viewed or regarded; interpretation; view.
4 part; feature; phase. (Dictionary.com)
10 application: short for application program applications package, wherein application program is defined: a computer program that is written and designed for a specific need or purpose, wherein purpose is defined: a fixed design, outcome, or idea that is the object of an action or other effort, wherein idea is defined: the characterization of something in general terms; concept, wherein characterization is defined: description of character, traits, etc, wherein character is defined: the combination of traits and qualities distinguishing the individual nature of a person or thing, wherein quality is defined: a distinguishing characteristic, property, or attribute, wherein property is defined: a quality, attribute, or distinctive feature of anything, esp a characteristic attribute such as the density or strength of a material, wherein feature is defined: a prominent or distinctive part or aspect, as of a landscape, building, book, etc (Dictionary.com)
11 CLAIM SCOPE via applicant’s disclosure:
[0402]Certain features of the embodiments/cases, which may have been, for clarity, described in the context of separate embodiments/cases, may also be provided in various combinations in a single embodiment/case. Conversely, various features of the embodiments/cases, which may have been, for brevity, described in the context of a single embodiment/case, may also be provided separately or in any suitable sub-combination. The embodiments/cases are not limited in their applications to the details of the order or sequence of steps of operation of methods, or to details of implementation of devices, set in the description, drawings, or examples. In addition, individual blocks illustrated in the figures may be functional in nature and do not necessarily correspond to discrete hardware elements. While the methods disclosed herein have been described and shown with reference to particular steps performed in a particular order, it is understood that these steps may be combined, sub-divided, or reordered to form an equivalent method without departing from the teachings of the embodiments/cases. Accordingly, unless specifically indicated herein, the order and grouping of the steps is not a limitation of the embodiments/cases. Embodiments/cases described in conjunction with specific examples are presented by way of example, and not limitation. Moreover, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and scope of the appended claims and their equivalents.
wherein scope is defined: Linguistics, Logic. the range of words (such as claim 1, line 1:“A system” or claim 1, line 7: “server”) or elements of an expression (claims 1 and 3) over which a modifier (or patent examiner) or operator (or me) has control. (Dictionary.com): --organization system-- & --organization server--.
12 organization: order or system; method, wherein system is defined: a coordinated body of methods or a scheme or plan of procedure; organizational scheme. (Dictionary.com)
13 Since Markush alternative (B) is taught the Markush element [(A) and (B)] is taught.
14 situation: condition; case; plight, wherein condition is defined: a particular mode of being of a person or thing; existing state; situation with respect to circumstances, wherein circumstance is defined: an incident or occurrence, wherein occurrence is defined: something that happens; event; incident. (Dictionary.com)
15 passenger: a person who is traveling in an automobile, bus, train, airplane, or other conveyance, especially one who is not the driver, pilot, or the like, wherein in is defined: (used to indicate inclusion within space, a place, or limits), wherein within is defined: in or into the interior or inner part; inside, wherein into is defined: (used to indicate entry, inclusion, or introduction in a place or condition). (Dictionary.com)
16 Given Markush alternative (A) is taught, the Markush element [(A),(B)] is taught
17 Given Markush alternative (C) is taught, the Markush element [(C) and/or (D)] is taught
18 BROAD CLAIM LANGUAGE: enhance: (tr) to intensify or increase in quality, value, power, etc; improve; augment (Dictionary.com)
19 Markush element of coordinate-adjective Markush alternatives follows: [(A) & (B)]=[(B) & (A)]: no difference in meaning when swapped
20 specific: something specific, as a statement, quality, detail, etc. (Dictionary.com)
21 application: short for application program applications package, wherein application program is defined: a computer program that is written and designed for a specific need or purpose, wherein purpose is defined: fixed intention in doing something; determination (Dictionary.com)
22 BROAD CLAIM LANGUAGE: context: the set of circumstances or facts that surround a particular event, situation, etc., wherein etc. is defined: and others; and so forth; and so on (used to indicate that more of the same sort or class might have been mentioned, but for brevity have been omitted), wherein so is defined: likewise or correspondingly; also; too, wherein forth is defined: out, as from concealment or inaction; into view or consideration : and likewise into consideration (Dictionary.com)
23 BROAD CLAIM LANGUAGE: to extend over or across (a section of land, a river, etc.). (Dictionary.com)
24 feature: a prominent or distinctive part or aspect, as of a landscape, building, book, etc (Dictionary.com)
25 Markush element follows
26 geographical: of or relating to the natural features, population, industries, etc., of a region or regions, wherein natural is defined: having a real or physical existence, as opposed to one that is spiritual, intellectual, fictitious, etc. (Dictionary.com)
27 feature: a prominent or distinctive part or aspect, as of a landscape, building, book, etc (Dictionary.com)
28 another Markush element follows
29 situation: condition; case; plight, wherein condition is defined: a particular mode of being of a person or thing; existing state; situation with respect to circumstances, wherein circumstance is defined: an incident or occurrence, wherein occurrence is defined: something that happens; event; incident. (Dictionary.com)
30 Since Markush alternative (A) is taught, the Markush element [(A),(B) and (C)] is taught.
31 situation: condition; case; plight, wherein condition is defined: a particular mode of being of a person or thing; existing state; situation with respect to circumstances, wherein circumstance is defined: an incident or occurrence, wherein occurrence is defined: something that happens; event; incident. (Dictionary.com)
32 Since Markush alternative (A) is taught the Markush element [(A) and (B)] is taught
33 Since Markush alternative (D) is taught, the Markush element is taught.
34 teraflop: A measure of computing speed equal to one trillion floating-point operations per second, wherein trillion is defined: a cardinal number represented in the U.S. by 1 followed by 12 zeros, and in Great Britain by 1 followed by 18 zeros. (Dictionary.com)
35 exa-: a combining form used in the names of units of measure equal to one quintillion (1018 ) of a given base unit, wherein quintillion is defined: a cardinal number represented in the U.S. by 1 followed by 18 zeros, and in Great Britain by 1 followed by 30 zeros. (Dictionary.com)
36 ingest: to take, as food, into the body (opposed to egest), wherein body is defined: a collective group, wherein group is defined: any collection or assemblage of persons or things; cluster; aggregation, wherein cluster is defined: a number of things of the same kind, growing or held together; a bunch, wherein kind is defined: a class or group of individual objects, people, animals, etc., of the same nature or character, or classified together because they have traits in common; category. (Dictionary.com)
37 too numerous to cite
38 train: to give the discipline and instruction, drill, practice, etc., designed to impart proficiency or efficiency, wherein efficiency is defined: the quality or state of being efficient; competence; effectiveness, wherein efficient is defined: functioning or producing effectively and with the least waste of effort; competent, wherein competent is defined: suitable or sufficient for the purpose (Dictionary.com)
39 “categories” is understood to be plural: pertaining to or involving a plurality of persons or things, wherein plurality is defined: a large number; multitude. (Dictionary.com)
40 BROAD CLAIM LANGUAGE: category: a class or group of things, people, etc, possessing some quality or qualities in common; a division in a system of classification (Dictionary.com)
41 THE CLAIMED INVENTION AS A WHOLE regarding “at least one month”:
The problem is “inaccuracy” (it looks like inaccurate corner points in figures 7A-7D) via applicant’s disclosure:
[0198]In one embodiment, each of the data interfaces is configured to: (i) use the respective three-dimensional mapping configuration to create a three-dimensional representation of areas surrounding locations visited by the respective autonomous on-road vehicle, in which said three-dimensional representation comprises an inherent inaccuracy (e.g., 5-inter-i uses 4-lidar-i to create a 3D representation of objects in 1-GEO-AREA, such as a 3D representation 3-3D-i1, 3-3D-i2, 3-3D-i3, 3-3D-i4, 3-3D-i5, 3-3D-i6, FIG. 7B, of object 1-object-3. 5-inter-j uses 4-lidar-j to create a 3D representation of objects in 1-GEO-AREA, such as a 3D representation 3-3D-j1, 3-3D-j2, 3-3D-j3, 3-3D-j4, 3-3D-j5, 3-3D-j6, FIG. 7C, of object 1-object-3. 5-inter-k uses 4-lidar-k to create a 3D representation of objects in 1-GEO-AREA, such as a 3D representation 3-3D-k1, 3-3D-k2, 3-3D-k3, 3-3D-k4, 3-3D-k5, 3-3D-k6, FIG. 7D, of object 1-object-3), and (ii) send said three-dimensional representation 3-3D-i, 3-3D-j, 3-3D-k to the sever 94-server; the server 94-server is configured to receive said plurality of three-dimensional representations 3-3D-i, 3-3D-j, 3-3D-k respectively from the plurality of autonomous on-road vehicles 10i, 10j, 10k, in which the plurality of three-dimensional representations comprises respectively the plurality of inherent inaccuracies; and the server 94-server is further configured to fuse said plurality of three-dimensional representations 3-3D-i, 3-3D-j, 3-3D-k into a single fused three-dimensional representation 3-3D-fuse (FIG. 7E) using at least one data combining technique, in which said single fused three-dimensional representation 3-3D-fuse (3-3D-fuse1, 3-3D-fuse2, 3-3D-fuse3, 3-3D-fuse4, 3-3D-fuse5, 3-3D-fuse6) comprises a new level of inaccuracy that is lower than said inherent inaccuracies as a result of said data combining technique. 
For example, the geo-spatial coordinate of the upper-front-right vertex of object 1-object-3 is perceived by vehicle 10i as being 3-3D-i3. The geo-spatial coordinate of the same upper-front-right vertex of object 1-object-3 is perceived by vehicle 10j as being 3-3D-j3. The geo-spatial coordinate of yet the same upper-front-right vertex of object 1-object-3 is perceived by vehicle 10k as being 3-3D-k3. Now, since 3-3D-i3, 3-3D-j3, 3-3D-k3 are all inaccurate, the server 94-server fuses the coordinates 3-3D-i3, 3-3D-j3, 3-3D-k3 into a more accurate coordinate 3-3D-fuse3 of the upper-front-right vertex of object 1-object-3.
The solution to the “inaccuracy” problem looks like fusion of these corner points (fig. 7E).
The claimed “at least one month” does not appear in the disclosure solution [0198]: an indication of obviousness.
42 (italics) represent claim limitations already taught
43 (italics) represent claim limitations already taught
44 since Markush alternative (a) is taught, the Markush element [(a) (b) (c) (d) (e)] is taught.
45 wearable: capable of being worn; appropriate, suitable, or ready for wearing, wherein wear is defined:
to carry or have on the body or about the person as a covering, equipment, ornament, or the like. (Dictionary.com)
46 wearable: Digital Technology. relating to or noting a computer or advanced electronic device that is incorporated into an accessory worn on the body or an item of clothing, wherein wear is defined:
to carry or have on the body or about the person as a covering, equipment, ornament, or the like. (Dictionary.com)
47 Since Markush alternative (b) is taught, the Markush element [(a) (b) (c) (d)] is taught.