Reusing PDE in modeling context

If you’re developing with EMF, it is a common task to split a model into multiple files and let the user create cross-links between them. This works out of the box if the files are located inside the same project. However, for larger models this is sometimes not enough: it can be useful to define reusable (and versioned) packages of models, and to let the user define a configuration that determines which model packages are imported. Sound familiar? The same functionality exists in PDE. Maybe it can be reused for non-Java purposes.

Plug-ins as model containers

Creating plug-in projects to contain models is easy with PDE: not a single line of code is needed. The user can create a non-Java plug-in project, or convert a resource project to a plug-in project by clicking ‘Configure/Convert to Plug-in Projects’ in the project’s context menu.

That’s it: the user can now define dependencies between projects and even install/update third-party plug-ins from update sites!
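For a plug-in that only contains models, the generated MANIFEST.MF stays small: it is essentially the bundle’s identity plus its dependency list. A sketch of what such a manifest might look like (all bundle names and versions here are made up):

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: org.example.models.common
Bundle-Name: Common Models
Bundle-Version: 1.0.0.qualifier
Require-Bundle: org.example.models.base;bundle-version="1.0.0"
```

The Require-Bundle header is what PDE’s dependency editor manipulates, and it carries exactly the import information we will traverse programmatically.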

Creating cross-links

Of course, if the user wants to use the models from the dependent plug-ins, we must show the contents of those models in the GUI. This step is highly domain-specific, so I leave it to the reader. However, determining the visible resources of the imported plug-ins is common to all uses.

To use the dependent models, we need to load not only the directly referenced plug-ins, but also, recursively, the plug-ins they depend on:

/**
 * Collect all plug-ins on which the given plug-in depends.
 * @param name the identifier of the root plug-in
 * @return the identifiers of the root plug-in and all of its dependencies
 */
public static List<String> collectAllDependencies(String name){
    Set<String> all = new HashSet<String>();

    Queue<IPlugin> process = new LinkedList<IPlugin>();
    process.add(getPlugin(name));

    while(!process.isEmpty()){
        IPlugin plugin = process.poll();
        all.add(plugin.getId());
        for(IPluginImport pi : plugin.getImports()){
            String imported = pi.getId();
            if (!all.contains(imported)){
                process.add(getPlugin(imported));
            }
        }
    }

    return new ArrayList<String>(all);
}

public static IPlugin getPlugin(String name){
    IPluginModelBase mb = PluginRegistry.findModel(name);
    IPluginModel m = (IPluginModel)mb;
    return m.getPlugin();
}
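The snippet above only runs inside Eclipse because it goes through the PDE registry. As a self-contained sketch, the same breadth-first traversal can be exercised over a plain in-memory dependency map (the plug-in names below are hypothetical); this variant also marks a plug-in as visited before enqueuing its imports, so diamond-shaped dependency graphs are not processed twice:

```java
import java.util.ArrayDeque;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

public class DependencyWalk {

    /**
     * Breadth-first traversal of a dependency graph given as a map from
     * plug-in id to the ids it imports. Stands in for the PDE registry.
     */
    public static Set<String> collectAll(Map<String, List<String>> deps, String root) {
        Set<String> all = new LinkedHashSet<>();
        Queue<String> process = new ArrayDeque<>();
        process.add(root);
        while (!process.isEmpty()) {
            String current = process.poll();
            if (all.add(current)) { // false means already visited: skip
                for (String imported : deps.getOrDefault(current, List.of())) {
                    process.add(imported);
                }
            }
        }
        return all;
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = Map.of(
            "org.example.models", List.of("org.example.common"),
            "org.example.common", List.of("org.example.base"),
            "org.example.base",   List.of());
        // prints [org.example.models, org.example.common, org.example.base]
        System.out.println(collectAll(deps, "org.example.models"));
    }
}
```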

Once we have all the plug-ins to use, we just need to collect the resources from them. Here we must prepare for two cases: the plug-in may or may not be a workspace plug-in. A workspace plug-in is a plug-in under development: a project in the workspace contains the plug-in’s contents. Installed plug-ins, in contrast, live in the plugins directory of the Eclipse installation:

public static Collection<URI> getVisibleResources(String pluginname) throws CoreException{
    IPlugin plugin = getPlugin(pluginname);
    final Collection<URI> result = new ArrayList<URI>();

    //The plugin has an underlying resource if it is a workspace plug-in
    IResource r = plugin.getPluginModel().getUnderlyingResource();
    if (r != null){
        r = r.getProject();
        r.accept(new IResourceVisitor() {

            @Override
            public boolean visit(IResource resource) throws CoreException {
                if (resource instanceof IFile){
                    result.add(URI.createPlatformResourceURI(resource.getFullPath().toString(), true));
                    return false;
                }
                return true;
            }
        });
    }else{
        Bundle b = Platform.getBundle(plugin.getId());
        //findEntries returns null if no matching entry is found
        Enumeration<URL> urls = b.findEntries("/", "*.e", true);
        while(urls != null && urls.hasMoreElements()){
            URL url = urls.nextElement();
            URI uri = URI.createPlatformPluginURI(pluginname+url.getPath(), true);
            result.add(uri);
        }
    }

    return result;

}

As with normal Eclipse plug-ins, if a workspace plug-in exists with the same name as an installed plug-in, the workspace version is used.

Because EMF needs all resources to be loaded into one resource set to resolve cross-links, it is advisable to use some kind of indexer to reduce load times.
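The simplest form of such an indexer is a cache that guarantees each resource is parsed at most once per resource set. A minimal, EMF-independent sketch of the idea (the type parameter and loader function stand in for the real loading call, which in EMF would be ResourceSet.getResource(uri, true)):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

/**
 * Minimal load cache: each URI is handed to the loader function at most
 * once; later lookups return the cached result.
 */
public class CachedLoader<R> {

    private final Map<String, R> loaded = new HashMap<String, R>();
    private final Function<String, R> loader;

    public CachedLoader(Function<String, R> loader) {
        this.loader = loader;
    }

    /** Load the resource behind the URI, or return the cached copy. */
    public R get(String uri) {
        return loaded.computeIfAbsent(uri, loader);
    }

    public static void main(String[] args) {
        AtomicInteger parses = new AtomicInteger();
        CachedLoader<String> cache = new CachedLoader<String>(
                uri -> "model#" + parses.incrementAndGet());

        cache.get("platform:/resource/p/a.e");
        cache.get("platform:/resource/p/a.e"); //second lookup hits the cache
        System.out.println(parses.get());      //prints 1
    }
}
```

A real index would of course also map qualified names to the URIs that declare them, so that only the resources actually referenced need to be loaded at all.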

Special case: Xtext

If you’re using this method with Xtext, some things become much simpler, as it provides additional infrastructure on top of EMF. First, you won’t need to write your own indexing service: Xtext does everything for you, from caching to lazy linking. Second, you can implement your own builders as builder participants, which relieves you of the problem of creating and configuring resource sets and loading resources.

The easiest way to use PDE functionality is to implement the possible crosslinks as scopes:

public class PluginDependencyScope extends AbstractScope {

    private final List<IEObjectDescription> descs = new ArrayList<IEObjectDescription>();

    /**
     * @param context the URI of the resource being scoped
     * @param resourceset the resource set to load the visible resources into
     * @param parent the parent scope
     */
    public PluginDependencyScope(URI context,ResourceSet resourceset, IScope parent) {
        super(parent, false);
        String projname = context.trimFragment().segment(1);
        List<String> deps = MODembedCore.collectAllDependencies(projname);

        for(String d : deps){
            try {
                for(URI uri : MODembedCore.getVisibleResources(d)){
                    try{
                        Resource r = resourceset.getResource(uri, true);
                        for(EObject eo : r.getContents()){
                            if (eo instanceof Package){
                                String name = ((Package) eo).getName();
                                QualifiedName qname = QualifiedName.create(name.split("\\."));
                                descs.add(EObjectDescription.create(qname, eo));
                            }
                        }
                    }catch(Exception e){
                        //ignore resources that cannot be loaded
                    }
                }
            } catch (CoreException e) {
                //ignore plug-ins whose resources cannot be listed
            }
        }
    }

    /* (non-Javadoc)
     * @see org.eclipse.xtext.scoping.impl.AbstractScope#getAllLocalElements()
     */
    @Override
    protected Iterable<IEObjectDescription> getAllLocalElements() {
        return descs;
    }

}

Problems

This does the magic, but it’s not perfect. There are some major flaws which are not easy to deal with:

  • Depending on PDE pulls the entire PDE+JDT into your product even if your users do not intend to use them. This is a minor problem; it just adds overhead in disk usage.
  • PDE is designed mainly for Java plug-in projects. The user interface is full of items that are meaningless in our special case.
  • If the user is not familiar with PDE and OSGi concepts, she/he will have a hard time understanding the user interface and how cross-link resolution works.
  • When the user tries to add a plug-in to the dependency list, all installed plug-ins are listed, including platform, PDE, JDT, EMF and other components. It may be hard to tell which plug-ins contain models of a specific domain.

Conclusion

Functionally, PDE contains everything you will need for partitioning models. If the targeted user base is experienced with Eclipse, reusing PDE is an option. If needed, some issues can be worked out with some effort. For example, the MANIFEST.MF editor can be replaced with a more domain-specific editor, which can filter uninteresting plug-ins out of the dependency list and include additional information that may be needed by your domain.

I’m using this technique to share common libraries with the users of my own domain-specific language. It’s better than telling the user to download some files and copy them into the workspace. Let me know if you have found better ways.


3 thoughts on “Reusing PDE in modeling context”

  1. Interesting idea to reuse the plug-in dependencies, but I’m not convinced it is a good one.

    Unless your project type is strictly PDE-related (EMF is not!), introducing PDE as a dependency seems like a flaky idea. E.g. PDE provides a lot of UI elements that need to be filtered out. Even worse, by providing a small part of PDE we might prohibit a later installation of a more recent version of PDE (I experienced such problems lately, and it was ugly).

    On the other hand, there is a lesser-known, similar functionality, called referenced projects, that is defined on any Eclipse project. That could be used as well, and thus we wouldn’t introduce unnecessary dependencies and would maintain genericity.

    I know of the Java/JDT dependencies of Xtext (more specifically Xbase) – in that case, as Xtext is positioned as an extension of a Java classpath, that dependency is more realistic (however, it is not always needed, and can be circumvented, so the result stays general).

    [OFF]
    Btw, in case of Xtext only the letter X should be capitalized, XText is an incorrect spelling. I updated the post to reflect this.
    [/OFF]

  2. Yes, I must agree with everything you say, I know about the problems. I also know about the “referenced projects” feature, but it lacks a lot of the functionality that makes PDE so powerful (version handling, using models that come from the installation rather than the workspace – like a common library, etc.).

    The complete solution would be a PDE-like feature for EMF projects with similar functionality, but in a domain specific way. It is just easier to reuse existing functionality than to write my own code.

  3. Oh, the evergreen question of roll-your-own versus reuse. 😀 Because I don’t like the mentioned approach, I’ll try to outline what should be implemented when choosing the referenced-projects direction…

    For being domain-specific, I see only a single way: to provide an expression language (or API) that helps decide whether a selected project provides something needed. When this expression language is present, the resulting project type could either be generated or the defining model interpreted.

    Luckily, the Core Expressions could be extended to incorporate such concepts (on the other hand, those expressions are a hell to write and debug 🙁 ). But they (1) exist, (2) are defined on Resources and (3) can be extended using various property testers easily (already done it 🙂 ).

    Otherwise, creating a project type (or nature) that supports versioning is comparatively easy (and maybe the manifest file could be hijacked when it exists 🙂 to reuse the existing functionality). And then the API you described in this post would be easy to provide.
